00:00:00.000 Started by upstream project "autotest-per-patch" build number 132097
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.069 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.070 The recommended git tool is: git
00:00:00.070 using credential 00000000-0000-0000-0000-000000000002
00:00:00.071 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.109 Fetching changes from the remote Git repository
00:00:00.111 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.164 Using shallow fetch with depth 1
00:00:00.164 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.164 > git --version # timeout=10
00:00:00.220 > git --version # 'git version 2.39.2'
00:00:00.220 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.249 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.249 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.189 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.201 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.213 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:05.213 > git config core.sparsecheckout # timeout=10
00:00:05.223 > git read-tree -mu HEAD # timeout=10
00:00:05.238 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:05.256 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:05.256 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:05.366 [Pipeline] Start of Pipeline
00:00:05.381 [Pipeline] library
00:00:05.383 Loading library shm_lib@master
00:00:05.383 Library shm_lib@master is cached. Copying from home.
00:00:05.397 [Pipeline] node
00:00:05.406 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.407 [Pipeline] {
00:00:05.416 [Pipeline] catchError
00:00:05.417 [Pipeline] {
00:00:05.426 [Pipeline] wrap
00:00:05.433 [Pipeline] {
00:00:05.438 [Pipeline] stage
00:00:05.439 [Pipeline] { (Prologue)
00:00:05.648 [Pipeline] sh
00:00:05.933 + logger -p user.info -t JENKINS-CI
00:00:05.950 [Pipeline] echo
00:00:05.951 Node: CYP9
00:00:05.959 [Pipeline] sh
00:00:06.259 [Pipeline] setCustomBuildProperty
00:00:06.269 [Pipeline] echo
00:00:06.271 Cleanup processes
00:00:06.276 [Pipeline] sh
00:00:06.584 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.584 2925622 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.605 [Pipeline] sh
00:00:06.888 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.888 ++ grep -v 'sudo pgrep'
00:00:06.888 ++ awk '{print $1}'
00:00:06.888 + sudo kill -9
00:00:06.888 + true
00:00:06.900 [Pipeline] cleanWs
00:00:06.907 [WS-CLEANUP] Deleting project workspace...
00:00:06.907 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.914 [WS-CLEANUP] done
00:00:06.917 [Pipeline] setCustomBuildProperty
00:00:06.928 [Pipeline] sh
00:00:07.214 + sudo git config --global --replace-all safe.directory '*'
00:00:07.284 [Pipeline] httpRequest
00:00:07.619 [Pipeline] echo
00:00:07.621 Sorcerer 10.211.164.101 is alive
00:00:07.628 [Pipeline] retry
00:00:07.630 [Pipeline] {
00:00:07.640 [Pipeline] httpRequest
00:00:07.645 HttpMethod: GET
00:00:07.645 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.646 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.655 Response Code: HTTP/1.1 200 OK
00:00:07.655 Success: Status code 200 is in the accepted range: 200,404
00:00:07.656 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:11.150 [Pipeline] }
00:00:11.168 [Pipeline] // retry
00:00:11.176 [Pipeline] sh
00:00:11.466 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:11.483 [Pipeline] httpRequest
00:00:12.180 [Pipeline] echo
00:00:12.183 Sorcerer 10.211.164.101 is alive
00:00:12.192 [Pipeline] retry
00:00:12.195 [Pipeline] {
00:00:12.209 [Pipeline] httpRequest
00:00:12.213 HttpMethod: GET
00:00:12.214 URL: http://10.211.164.101/packages/spdk_f0e4b91ff9acfa9329867542582368c40bc525b9.tar.gz
00:00:12.214 Sending request to url: http://10.211.164.101/packages/spdk_f0e4b91ff9acfa9329867542582368c40bc525b9.tar.gz
00:00:12.235 Response Code: HTTP/1.1 200 OK
00:00:12.235 Success: Status code 200 is in the accepted range: 200,404
00:00:12.235 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_f0e4b91ff9acfa9329867542582368c40bc525b9.tar.gz
00:01:06.568 [Pipeline] }
00:01:06.586 [Pipeline] // retry
00:01:06.593 [Pipeline] sh
00:01:06.880 + tar --no-same-owner -xf spdk_f0e4b91ff9acfa9329867542582368c40bc525b9.tar.gz
00:01:10.197 [Pipeline] sh
00:01:10.484 + git -C spdk log --oneline -n5
00:01:10.485 f0e4b91ff nvme/rdma: Add likely/unlikely to IO path
00:01:10.485 51bde6628 nvme/rdma: Factor our contig request preparation
00:01:10.485 07416b7ee lib/rdma_provider: Allow to set data_transfer cb
00:01:10.485 1794c395e nvme/rdma: Allocate memory domain in rdma provider
00:01:10.485 a4c634476 bdev/nvme: Fix race between IO channel creation and reconnection
00:01:10.496 [Pipeline] }
00:01:10.509 [Pipeline] // stage
00:01:10.517 [Pipeline] stage
00:01:10.519 [Pipeline] { (Prepare)
00:01:10.534 [Pipeline] writeFile
00:01:10.548 [Pipeline] sh
00:01:10.834 + logger -p user.info -t JENKINS-CI
00:01:10.847 [Pipeline] sh
00:01:11.133 + logger -p user.info -t JENKINS-CI
00:01:11.145 [Pipeline] sh
00:01:11.430 + cat autorun-spdk.conf
00:01:11.430 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.430 SPDK_TEST_NVMF=1
00:01:11.430 SPDK_TEST_NVME_CLI=1
00:01:11.430 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:11.430 SPDK_TEST_NVMF_NICS=e810
00:01:11.430 SPDK_TEST_VFIOUSER=1
00:01:11.430 SPDK_RUN_UBSAN=1
00:01:11.430 NET_TYPE=phy
00:01:11.438 RUN_NIGHTLY=0
00:01:11.443 [Pipeline] readFile
00:01:11.467 [Pipeline] withEnv
00:01:11.469 [Pipeline] {
00:01:11.481 [Pipeline] sh
00:01:11.892 + set -ex
00:01:11.892 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:11.892 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:11.892 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.892 ++ SPDK_TEST_NVMF=1
00:01:11.892 ++ SPDK_TEST_NVME_CLI=1
00:01:11.892 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:11.892 ++ SPDK_TEST_NVMF_NICS=e810
00:01:11.892 ++ SPDK_TEST_VFIOUSER=1
00:01:11.892 ++ SPDK_RUN_UBSAN=1
00:01:11.892 ++ NET_TYPE=phy
00:01:11.892 ++ RUN_NIGHTLY=0
00:01:11.892 + case $SPDK_TEST_NVMF_NICS in
00:01:11.892 + DRIVERS=ice
00:01:11.892 + [[ tcp == \r\d\m\a ]]
00:01:11.892 + [[ -n ice ]]
00:01:11.892 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:11.892 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:11.892 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:11.892 rmmod: ERROR: Module irdma is not currently loaded
00:01:11.892 rmmod: ERROR: Module i40iw is not currently loaded
00:01:11.892 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:11.892 + true
00:01:11.892 + for D in $DRIVERS
00:01:11.892 + sudo modprobe ice
00:01:11.892 + exit 0
00:01:11.903 [Pipeline] }
00:01:11.916 [Pipeline] // withEnv
00:01:11.921 [Pipeline] }
00:01:11.933 [Pipeline] // stage
00:01:11.941 [Pipeline] catchError
00:01:11.943 [Pipeline] {
00:01:11.956 [Pipeline] timeout
00:01:11.956 Timeout set to expire in 1 hr 0 min
00:01:11.958 [Pipeline] {
00:01:11.973 [Pipeline] stage
00:01:11.975 [Pipeline] { (Tests)
00:01:11.987 [Pipeline] sh
00:01:12.275 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:12.275 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:12.275 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:12.275 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:12.275 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:12.275 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:12.275 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:12.275 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:12.275 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:12.275 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:12.275 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:12.275 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:12.275 + source /etc/os-release
00:01:12.275 ++ NAME='Fedora Linux'
00:01:12.275 ++ VERSION='39 (Cloud Edition)'
00:01:12.275 ++ ID=fedora
00:01:12.275 ++ VERSION_ID=39
00:01:12.275 ++ VERSION_CODENAME=
00:01:12.275 ++ PLATFORM_ID=platform:f39
00:01:12.275 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:12.275 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:12.275 ++ LOGO=fedora-logo-icon
00:01:12.275 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:12.275 ++ HOME_URL=https://fedoraproject.org/
00:01:12.275 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:12.275 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:12.275 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:12.275 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:12.275 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:12.275 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:12.275 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:12.275 ++ SUPPORT_END=2024-11-12
00:01:12.275 ++ VARIANT='Cloud Edition'
00:01:12.275 ++ VARIANT_ID=cloud
00:01:12.275 + uname -a
00:01:12.275 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:12.275 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:14.822 Hugepages
00:01:14.822 node hugesize free / total
00:01:14.822 node0 1048576kB 0 / 0
00:01:14.822 node0 2048kB 0 / 0
00:01:14.822 node1 1048576kB 0 / 0
00:01:14.822 node1 2048kB 0 / 0
00:01:14.822
00:01:14.822 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:14.822 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:14.822 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:14.822 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:14.822 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:14.822 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:14.822 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:14.822 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:14.822 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:15.084 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:15.084 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:15.084 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:15.084 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:15.084 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:15.084 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:15.084 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:15.084 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:15.084 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:15.084 + rm -f /tmp/spdk-ld-path
00:01:15.084 + source autorun-spdk.conf
00:01:15.084 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.084 ++ SPDK_TEST_NVMF=1
00:01:15.084 ++ SPDK_TEST_NVME_CLI=1
00:01:15.084 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:15.084 ++ SPDK_TEST_NVMF_NICS=e810
00:01:15.084 ++ SPDK_TEST_VFIOUSER=1
00:01:15.084 ++ SPDK_RUN_UBSAN=1
00:01:15.084 ++ NET_TYPE=phy
00:01:15.084 ++ RUN_NIGHTLY=0
00:01:15.084 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:15.084 + [[ -n '' ]]
00:01:15.084 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:15.084 + for M in /var/spdk/build-*-manifest.txt
00:01:15.084 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:15.084 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:15.084 + for M in /var/spdk/build-*-manifest.txt
00:01:15.084 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:15.084 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:15.084 + for M in /var/spdk/build-*-manifest.txt
00:01:15.084 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:15.084 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:15.084 ++ uname
00:01:15.084 + [[ Linux == \L\i\n\u\x ]]
00:01:15.084 + sudo dmesg -T
00:01:15.084 + sudo dmesg --clear
00:01:15.084 + dmesg_pid=2927165
00:01:15.084 + [[ Fedora Linux == FreeBSD ]]
00:01:15.084 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:15.084 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:15.084 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:15.084 + [[ -x /usr/src/fio-static/fio ]]
00:01:15.084 + export FIO_BIN=/usr/src/fio-static/fio
00:01:15.084 + FIO_BIN=/usr/src/fio-static/fio
00:01:15.084 + sudo dmesg -Tw
00:01:15.084 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:15.084 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:15.084 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:15.084 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:15.084 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:15.084 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:15.084 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:15.084 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:15.084 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:15.346 10:43:06 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:15.346 10:43:06 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:15.346 10:43:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.346 10:43:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:15.346 10:43:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:15.346 10:43:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:15.346 10:43:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:15.346 10:43:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:15.346 10:43:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:15.346 10:43:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:15.346 10:43:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:15.346 10:43:06 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:15.346 10:43:06 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:15.346 10:43:06 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:15.346 10:43:06 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:15.346 10:43:06 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:15.346 10:43:06 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:15.346 10:43:06 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:15.346 10:43:06 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:15.346 10:43:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:15.346 10:43:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:15.346 10:43:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:15.346 10:43:06 -- paths/export.sh@5 -- $ export PATH
00:01:15.346 10:43:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:15.346 10:43:06 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:15.346 10:43:06 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:15.346 10:43:06 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730886186.XXXXXX
00:01:15.346 10:43:06 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730886186.A9DnA0
00:01:15.346 10:43:06 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:15.346 10:43:06 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:15.347 10:43:06 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:15.347 10:43:06 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:15.347 10:43:06 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:15.347 10:43:06 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:15.347 10:43:06 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:15.347 10:43:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.347 10:43:06 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:15.347 10:43:06 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:15.347 10:43:06 -- pm/common@17 -- $ local monitor
00:01:15.347 10:43:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.347 10:43:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.347 10:43:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.347 10:43:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.347 10:43:06 -- pm/common@21 -- $ date +%s
00:01:15.347 10:43:06 -- pm/common@25 -- $ sleep 1
00:01:15.347 10:43:06 -- pm/common@21 -- $ date +%s
00:01:15.347 10:43:06 -- pm/common@21 -- $ date +%s
00:01:15.347 10:43:06 -- pm/common@21 -- $ date +%s
00:01:15.347 10:43:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730886186
00:01:15.347 10:43:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730886186
00:01:15.347 10:43:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730886186
00:01:15.347 10:43:06 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730886186
00:01:15.347 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730886186_collect-cpu-temp.pm.log
00:01:15.347 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730886186_collect-cpu-load.pm.log
00:01:15.347 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730886186_collect-vmstat.pm.log
00:01:15.347 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730886186_collect-bmc-pm.bmc.pm.log
00:01:16.290 10:43:07 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:16.290 10:43:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:16.290 10:43:07 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:16.290 10:43:07 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:16.290 10:43:07 -- spdk/autobuild.sh@16 -- $ date -u
00:01:16.290 Wed Nov 6 09:43:07 AM UTC 2024
00:01:16.290 10:43:07 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:16.551 v25.01-pre-163-gf0e4b91ff
00:01:16.551 10:43:07 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:16.551 10:43:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:16.551 10:43:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:16.551 10:43:07 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:16.551 10:43:07 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:16.551 10:43:07 -- common/autotest_common.sh@10 -- $ set +x
00:01:16.551 ************************************
00:01:16.551 START TEST ubsan
00:01:16.551 ************************************
00:01:16.551 10:43:07 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:01:16.551 using ubsan
00:01:16.551
00:01:16.551 real 0m0.001s
00:01:16.551 user 0m0.001s
00:01:16.551 sys 0m0.000s
00:01:16.551 10:43:07 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:16.551 10:43:07 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:16.551 ************************************
00:01:16.551 END TEST ubsan
00:01:16.551 ************************************
00:01:16.551 10:43:07 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:16.551 10:43:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:16.551 10:43:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:16.551 10:43:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:16.551 10:43:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:16.551 10:43:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:16.551 10:43:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:16.551 10:43:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:16.551 10:43:07 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:16.551 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:16.551 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:17.123 Using 'verbs' RDMA provider
00:01:33.060 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:45.306 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:45.306 Creating mk/config.mk...done.
00:01:45.306 Creating mk/cc.flags.mk...done.
00:01:45.306 Type 'make' to build.
00:01:45.306 10:43:36 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:01:45.306 10:43:36 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:45.306 10:43:36 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:45.306 10:43:36 -- common/autotest_common.sh@10 -- $ set +x
00:01:45.306 ************************************
00:01:45.306 START TEST make
00:01:45.306 ************************************
00:01:45.306 10:43:36 make -- common/autotest_common.sh@1127 -- $ make -j144
00:01:45.567 make[1]: Nothing to be done for 'all'.
00:01:46.957 The Meson build system
00:01:46.957 Version: 1.5.0
00:01:46.957 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:46.957 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:46.957 Build type: native build
00:01:46.957 Project name: libvfio-user
00:01:46.957 Project version: 0.0.1
00:01:46.957 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:46.957 C linker for the host machine: cc ld.bfd 2.40-14
00:01:46.957 Host machine cpu family: x86_64
00:01:46.957 Host machine cpu: x86_64
00:01:46.957 Run-time dependency threads found: YES
00:01:46.957 Library dl found: YES
00:01:46.957 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:46.957 Run-time dependency json-c found: YES 0.17
00:01:46.957 Run-time dependency cmocka found: YES 1.1.7
00:01:46.957 Program pytest-3 found: NO
00:01:46.957 Program flake8 found: NO
00:01:46.957 Program misspell-fixer found: NO
00:01:46.957 Program restructuredtext-lint found: NO
00:01:46.957 Program valgrind found: YES (/usr/bin/valgrind)
00:01:46.957 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:46.957 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:46.957 Compiler for C supports arguments -Wwrite-strings: YES
00:01:46.957 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:46.957 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:46.957 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:46.957 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:46.957 Build targets in project: 8
00:01:46.957 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:46.957 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:46.957
00:01:46.957 libvfio-user 0.0.1
00:01:46.957
00:01:46.957 User defined options
00:01:46.957 buildtype : debug
00:01:46.957 default_library: shared
00:01:46.957 libdir : /usr/local/lib
00:01:46.957
00:01:46.957 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:47.216 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:47.216 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:47.216 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:47.216 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:47.216 [4/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:47.216 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:47.216 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:47.216 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:47.216 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:47.216 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:47.216 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:47.216 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:47.216 [12/37] Compiling C object samples/null.p/null.c.o
00:01:47.216 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:47.216 [14/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:47.216 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:47.477 [16/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:47.477 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:47.477 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:47.477 [19/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:47.477 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:47.477 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:47.477 [22/37] Compiling C object samples/server.p/server.c.o
00:01:47.477 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:47.477 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:47.477 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:47.477 [26/37] Compiling C object samples/client.p/client.c.o
00:01:47.477 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:47.477 [28/37] Linking target samples/client
00:01:47.477 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:47.477 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:47.477 [31/37] Linking target test/unit_tests
00:01:47.742 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:47.742 [33/37] Linking target samples/server
00:01:47.742 [34/37] Linking target samples/shadow_ioeventfd_server
00:01:47.742 [35/37] Linking target samples/null
00:01:47.742 [36/37] Linking target samples/gpio-pci-idio-16
00:01:47.742 [37/37] Linking target samples/lspci
00:01:47.742 INFO: autodetecting backend as ninja
00:01:47.742 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:47.742 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:48.009 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:48.009 ninja: no work to do.
00:01:54.599 The Meson build system
00:01:54.599 Version: 1.5.0
00:01:54.599 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:54.599 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:54.599 Build type: native build
00:01:54.599 Program cat found: YES (/usr/bin/cat)
00:01:54.599 Project name: DPDK
00:01:54.599 Project version: 24.03.0
00:01:54.599 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:54.599 C linker for the host machine: cc ld.bfd 2.40-14
00:01:54.599 Host machine cpu family: x86_64
00:01:54.599 Host machine cpu: x86_64
00:01:54.599 Message: ## Building in Developer Mode ##
00:01:54.599 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:54.599 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:54.599 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:54.599 Program python3 found: YES (/usr/bin/python3)
00:01:54.599 Program cat found: YES (/usr/bin/cat)
00:01:54.599 Compiler for C supports arguments -march=native: YES
00:01:54.599 Checking for size of "void *" : 8
00:01:54.599 Checking for size of "void *" : 8 (cached)
00:01:54.599 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:54.599 Library m found: YES
00:01:54.599 Library numa found: YES
00:01:54.599 Has header "numaif.h" : YES
00:01:54.599 Library fdt found: NO
00:01:54.599 Library execinfo found: NO
00:01:54.599 Has header "execinfo.h" : YES
00:01:54.599 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:54.599 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:54.599 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:54.599 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:54.599 Run-time dependency openssl found: YES 3.1.1
00:01:54.599 Run-time dependency libpcap found: YES 1.10.4
00:01:54.599 Has header "pcap.h" with dependency libpcap: YES
00:01:54.599 Compiler for C supports arguments -Wcast-qual: YES
00:01:54.599 Compiler for C supports arguments -Wdeprecated: YES
00:01:54.600 Compiler for C supports arguments -Wformat: YES
00:01:54.600 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:54.600 Compiler for C supports arguments -Wformat-security: NO
00:01:54.600 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:54.600 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:54.600 Compiler for C supports arguments -Wnested-externs: YES
00:01:54.600 Compiler for C supports arguments -Wold-style-definition: YES
00:01:54.600 Compiler for C supports arguments -Wpointer-arith: YES
00:01:54.600 Compiler for C supports arguments -Wsign-compare: YES
00:01:54.600 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:54.600 Compiler for C supports arguments -Wundef: YES
00:01:54.600 Compiler for C supports arguments -Wwrite-strings: YES
00:01:54.600 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:54.600 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:54.600 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:54.600 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:54.600 Program objdump found: YES (/usr/bin/objdump)
00:01:54.600 Compiler for C supports arguments -mavx512f: YES
00:01:54.600 Checking if "AVX512 checking" compiles: YES
00:01:54.600 Fetching value of define "__SSE4_2__" : 1
00:01:54.600 Fetching value of define "__AES__" : 1
00:01:54.600 Fetching value of define "__AVX__" : 1
00:01:54.600 Fetching value of define "__AVX2__" : 1
00:01:54.600 Fetching value of define "__AVX512BW__" : 1
00:01:54.600 Fetching value of define "__AVX512CD__" : 1
00:01:54.600 Fetching value of define "__AVX512DQ__" : 1
00:01:54.600 Fetching value of define "__AVX512F__" : 1
00:01:54.600 Fetching value of define "__AVX512VL__" : 1 00:01:54.600 Fetching value of define "__PCLMUL__" : 1 00:01:54.600 Fetching value of define "__RDRND__" : 1 00:01:54.600 Fetching value of define "__RDSEED__" : 1 00:01:54.600 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:54.600 Fetching value of define "__znver1__" : (undefined) 00:01:54.600 Fetching value of define "__znver2__" : (undefined) 00:01:54.600 Fetching value of define "__znver3__" : (undefined) 00:01:54.600 Fetching value of define "__znver4__" : (undefined) 00:01:54.600 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:54.600 Message: lib/log: Defining dependency "log" 00:01:54.600 Message: lib/kvargs: Defining dependency "kvargs" 00:01:54.600 Message: lib/telemetry: Defining dependency "telemetry" 00:01:54.600 Checking for function "getentropy" : NO 00:01:54.600 Message: lib/eal: Defining dependency "eal" 00:01:54.600 Message: lib/ring: Defining dependency "ring" 00:01:54.600 Message: lib/rcu: Defining dependency "rcu" 00:01:54.600 Message: lib/mempool: Defining dependency "mempool" 00:01:54.600 Message: lib/mbuf: Defining dependency "mbuf" 00:01:54.600 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:54.600 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:54.600 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:54.600 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:54.600 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:54.600 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:54.600 Compiler for C supports arguments -mpclmul: YES 00:01:54.600 Compiler for C supports arguments -maes: YES 00:01:54.600 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:54.600 Compiler for C supports arguments -mavx512bw: YES 00:01:54.600 Compiler for C supports arguments -mavx512dq: YES 00:01:54.600 Compiler for C supports arguments -mavx512vl: YES 00:01:54.600 Compiler for C supports arguments -mvpclmulqdq: YES 
00:01:54.600 Compiler for C supports arguments -mavx2: YES 00:01:54.600 Compiler for C supports arguments -mavx: YES 00:01:54.600 Message: lib/net: Defining dependency "net" 00:01:54.600 Message: lib/meter: Defining dependency "meter" 00:01:54.600 Message: lib/ethdev: Defining dependency "ethdev" 00:01:54.600 Message: lib/pci: Defining dependency "pci" 00:01:54.600 Message: lib/cmdline: Defining dependency "cmdline" 00:01:54.600 Message: lib/hash: Defining dependency "hash" 00:01:54.600 Message: lib/timer: Defining dependency "timer" 00:01:54.600 Message: lib/compressdev: Defining dependency "compressdev" 00:01:54.600 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:54.600 Message: lib/dmadev: Defining dependency "dmadev" 00:01:54.600 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:54.600 Message: lib/power: Defining dependency "power" 00:01:54.600 Message: lib/reorder: Defining dependency "reorder" 00:01:54.600 Message: lib/security: Defining dependency "security" 00:01:54.600 Has header "linux/userfaultfd.h" : YES 00:01:54.600 Has header "linux/vduse.h" : YES 00:01:54.600 Message: lib/vhost: Defining dependency "vhost" 00:01:54.600 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:54.600 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:54.600 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:54.600 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:54.600 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:54.600 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:54.600 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:54.600 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:54.600 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:54.600 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:54.600 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:54.600 Configuring doxy-api-html.conf using configuration 00:01:54.600 Configuring doxy-api-man.conf using configuration 00:01:54.600 Program mandb found: YES (/usr/bin/mandb) 00:01:54.600 Program sphinx-build found: NO 00:01:54.600 Configuring rte_build_config.h using configuration 00:01:54.600 Message: 00:01:54.600 ================= 00:01:54.600 Applications Enabled 00:01:54.600 ================= 00:01:54.600 00:01:54.600 apps: 00:01:54.600 00:01:54.600 00:01:54.600 Message: 00:01:54.600 ================= 00:01:54.600 Libraries Enabled 00:01:54.600 ================= 00:01:54.600 00:01:54.600 libs: 00:01:54.600 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:54.600 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:54.600 cryptodev, dmadev, power, reorder, security, vhost, 00:01:54.600 00:01:54.600 Message: 00:01:54.600 =============== 00:01:54.600 Drivers Enabled 00:01:54.600 =============== 00:01:54.600 00:01:54.600 common: 00:01:54.600 00:01:54.600 bus: 00:01:54.600 pci, vdev, 00:01:54.600 mempool: 00:01:54.600 ring, 00:01:54.600 dma: 00:01:54.600 00:01:54.600 net: 00:01:54.600 00:01:54.600 crypto: 00:01:54.600 00:01:54.600 compress: 00:01:54.600 00:01:54.600 vdpa: 00:01:54.600 00:01:54.600 00:01:54.600 Message: 00:01:54.600 ================= 00:01:54.600 Content Skipped 00:01:54.600 ================= 00:01:54.600 00:01:54.600 apps: 00:01:54.600 dumpcap: explicitly disabled via build config 00:01:54.600 graph: explicitly disabled via build config 00:01:54.600 pdump: explicitly disabled via build config 00:01:54.600 proc-info: explicitly disabled via build config 00:01:54.600 test-acl: explicitly disabled via build config 00:01:54.600 test-bbdev: explicitly disabled via build config 00:01:54.600 test-cmdline: explicitly disabled via build config 00:01:54.600 test-compress-perf: explicitly disabled via build config 00:01:54.600 test-crypto-perf: explicitly disabled via build 
config 00:01:54.600 test-dma-perf: explicitly disabled via build config 00:01:54.600 test-eventdev: explicitly disabled via build config 00:01:54.600 test-fib: explicitly disabled via build config 00:01:54.600 test-flow-perf: explicitly disabled via build config 00:01:54.600 test-gpudev: explicitly disabled via build config 00:01:54.600 test-mldev: explicitly disabled via build config 00:01:54.600 test-pipeline: explicitly disabled via build config 00:01:54.600 test-pmd: explicitly disabled via build config 00:01:54.600 test-regex: explicitly disabled via build config 00:01:54.600 test-sad: explicitly disabled via build config 00:01:54.600 test-security-perf: explicitly disabled via build config 00:01:54.600 00:01:54.600 libs: 00:01:54.600 argparse: explicitly disabled via build config 00:01:54.600 metrics: explicitly disabled via build config 00:01:54.600 acl: explicitly disabled via build config 00:01:54.600 bbdev: explicitly disabled via build config 00:01:54.600 bitratestats: explicitly disabled via build config 00:01:54.600 bpf: explicitly disabled via build config 00:01:54.600 cfgfile: explicitly disabled via build config 00:01:54.600 distributor: explicitly disabled via build config 00:01:54.600 efd: explicitly disabled via build config 00:01:54.600 eventdev: explicitly disabled via build config 00:01:54.600 dispatcher: explicitly disabled via build config 00:01:54.600 gpudev: explicitly disabled via build config 00:01:54.600 gro: explicitly disabled via build config 00:01:54.600 gso: explicitly disabled via build config 00:01:54.600 ip_frag: explicitly disabled via build config 00:01:54.600 jobstats: explicitly disabled via build config 00:01:54.600 latencystats: explicitly disabled via build config 00:01:54.600 lpm: explicitly disabled via build config 00:01:54.600 member: explicitly disabled via build config 00:01:54.600 pcapng: explicitly disabled via build config 00:01:54.600 rawdev: explicitly disabled via build config 00:01:54.600 regexdev: explicitly 
disabled via build config 00:01:54.600 mldev: explicitly disabled via build config 00:01:54.601 rib: explicitly disabled via build config 00:01:54.601 sched: explicitly disabled via build config 00:01:54.601 stack: explicitly disabled via build config 00:01:54.601 ipsec: explicitly disabled via build config 00:01:54.601 pdcp: explicitly disabled via build config 00:01:54.601 fib: explicitly disabled via build config 00:01:54.601 port: explicitly disabled via build config 00:01:54.601 pdump: explicitly disabled via build config 00:01:54.601 table: explicitly disabled via build config 00:01:54.601 pipeline: explicitly disabled via build config 00:01:54.601 graph: explicitly disabled via build config 00:01:54.601 node: explicitly disabled via build config 00:01:54.601 00:01:54.601 drivers: 00:01:54.601 common/cpt: not in enabled drivers build config 00:01:54.601 common/dpaax: not in enabled drivers build config 00:01:54.601 common/iavf: not in enabled drivers build config 00:01:54.601 common/idpf: not in enabled drivers build config 00:01:54.601 common/ionic: not in enabled drivers build config 00:01:54.601 common/mvep: not in enabled drivers build config 00:01:54.601 common/octeontx: not in enabled drivers build config 00:01:54.601 bus/auxiliary: not in enabled drivers build config 00:01:54.601 bus/cdx: not in enabled drivers build config 00:01:54.601 bus/dpaa: not in enabled drivers build config 00:01:54.601 bus/fslmc: not in enabled drivers build config 00:01:54.601 bus/ifpga: not in enabled drivers build config 00:01:54.601 bus/platform: not in enabled drivers build config 00:01:54.601 bus/uacce: not in enabled drivers build config 00:01:54.601 bus/vmbus: not in enabled drivers build config 00:01:54.601 common/cnxk: not in enabled drivers build config 00:01:54.601 common/mlx5: not in enabled drivers build config 00:01:54.601 common/nfp: not in enabled drivers build config 00:01:54.601 common/nitrox: not in enabled drivers build config 00:01:54.601 common/qat: not 
in enabled drivers build config 00:01:54.601 common/sfc_efx: not in enabled drivers build config 00:01:54.601 mempool/bucket: not in enabled drivers build config 00:01:54.601 mempool/cnxk: not in enabled drivers build config 00:01:54.601 mempool/dpaa: not in enabled drivers build config 00:01:54.601 mempool/dpaa2: not in enabled drivers build config 00:01:54.601 mempool/octeontx: not in enabled drivers build config 00:01:54.601 mempool/stack: not in enabled drivers build config 00:01:54.601 dma/cnxk: not in enabled drivers build config 00:01:54.601 dma/dpaa: not in enabled drivers build config 00:01:54.601 dma/dpaa2: not in enabled drivers build config 00:01:54.601 dma/hisilicon: not in enabled drivers build config 00:01:54.601 dma/idxd: not in enabled drivers build config 00:01:54.601 dma/ioat: not in enabled drivers build config 00:01:54.601 dma/skeleton: not in enabled drivers build config 00:01:54.601 net/af_packet: not in enabled drivers build config 00:01:54.601 net/af_xdp: not in enabled drivers build config 00:01:54.601 net/ark: not in enabled drivers build config 00:01:54.601 net/atlantic: not in enabled drivers build config 00:01:54.601 net/avp: not in enabled drivers build config 00:01:54.601 net/axgbe: not in enabled drivers build config 00:01:54.601 net/bnx2x: not in enabled drivers build config 00:01:54.601 net/bnxt: not in enabled drivers build config 00:01:54.601 net/bonding: not in enabled drivers build config 00:01:54.601 net/cnxk: not in enabled drivers build config 00:01:54.601 net/cpfl: not in enabled drivers build config 00:01:54.601 net/cxgbe: not in enabled drivers build config 00:01:54.601 net/dpaa: not in enabled drivers build config 00:01:54.601 net/dpaa2: not in enabled drivers build config 00:01:54.601 net/e1000: not in enabled drivers build config 00:01:54.601 net/ena: not in enabled drivers build config 00:01:54.601 net/enetc: not in enabled drivers build config 00:01:54.601 net/enetfec: not in enabled drivers build config 
00:01:54.601 net/enic: not in enabled drivers build config 00:01:54.601 net/failsafe: not in enabled drivers build config 00:01:54.601 net/fm10k: not in enabled drivers build config 00:01:54.601 net/gve: not in enabled drivers build config 00:01:54.601 net/hinic: not in enabled drivers build config 00:01:54.601 net/hns3: not in enabled drivers build config 00:01:54.601 net/i40e: not in enabled drivers build config 00:01:54.601 net/iavf: not in enabled drivers build config 00:01:54.601 net/ice: not in enabled drivers build config 00:01:54.601 net/idpf: not in enabled drivers build config 00:01:54.601 net/igc: not in enabled drivers build config 00:01:54.601 net/ionic: not in enabled drivers build config 00:01:54.601 net/ipn3ke: not in enabled drivers build config 00:01:54.601 net/ixgbe: not in enabled drivers build config 00:01:54.601 net/mana: not in enabled drivers build config 00:01:54.601 net/memif: not in enabled drivers build config 00:01:54.601 net/mlx4: not in enabled drivers build config 00:01:54.601 net/mlx5: not in enabled drivers build config 00:01:54.601 net/mvneta: not in enabled drivers build config 00:01:54.601 net/mvpp2: not in enabled drivers build config 00:01:54.601 net/netvsc: not in enabled drivers build config 00:01:54.601 net/nfb: not in enabled drivers build config 00:01:54.601 net/nfp: not in enabled drivers build config 00:01:54.601 net/ngbe: not in enabled drivers build config 00:01:54.601 net/null: not in enabled drivers build config 00:01:54.601 net/octeontx: not in enabled drivers build config 00:01:54.601 net/octeon_ep: not in enabled drivers build config 00:01:54.601 net/pcap: not in enabled drivers build config 00:01:54.601 net/pfe: not in enabled drivers build config 00:01:54.601 net/qede: not in enabled drivers build config 00:01:54.601 net/ring: not in enabled drivers build config 00:01:54.601 net/sfc: not in enabled drivers build config 00:01:54.601 net/softnic: not in enabled drivers build config 00:01:54.601 net/tap: not in 
enabled drivers build config 00:01:54.601 net/thunderx: not in enabled drivers build config 00:01:54.601 net/txgbe: not in enabled drivers build config 00:01:54.601 net/vdev_netvsc: not in enabled drivers build config 00:01:54.601 net/vhost: not in enabled drivers build config 00:01:54.601 net/virtio: not in enabled drivers build config 00:01:54.601 net/vmxnet3: not in enabled drivers build config 00:01:54.601 raw/*: missing internal dependency, "rawdev" 00:01:54.601 crypto/armv8: not in enabled drivers build config 00:01:54.601 crypto/bcmfs: not in enabled drivers build config 00:01:54.601 crypto/caam_jr: not in enabled drivers build config 00:01:54.601 crypto/ccp: not in enabled drivers build config 00:01:54.601 crypto/cnxk: not in enabled drivers build config 00:01:54.601 crypto/dpaa_sec: not in enabled drivers build config 00:01:54.601 crypto/dpaa2_sec: not in enabled drivers build config 00:01:54.601 crypto/ipsec_mb: not in enabled drivers build config 00:01:54.601 crypto/mlx5: not in enabled drivers build config 00:01:54.601 crypto/mvsam: not in enabled drivers build config 00:01:54.601 crypto/nitrox: not in enabled drivers build config 00:01:54.601 crypto/null: not in enabled drivers build config 00:01:54.601 crypto/octeontx: not in enabled drivers build config 00:01:54.601 crypto/openssl: not in enabled drivers build config 00:01:54.601 crypto/scheduler: not in enabled drivers build config 00:01:54.601 crypto/uadk: not in enabled drivers build config 00:01:54.601 crypto/virtio: not in enabled drivers build config 00:01:54.601 compress/isal: not in enabled drivers build config 00:01:54.601 compress/mlx5: not in enabled drivers build config 00:01:54.601 compress/nitrox: not in enabled drivers build config 00:01:54.601 compress/octeontx: not in enabled drivers build config 00:01:54.601 compress/zlib: not in enabled drivers build config 00:01:54.601 regex/*: missing internal dependency, "regexdev" 00:01:54.601 ml/*: missing internal dependency, "mldev" 
00:01:54.601 vdpa/ifc: not in enabled drivers build config 00:01:54.601 vdpa/mlx5: not in enabled drivers build config 00:01:54.601 vdpa/nfp: not in enabled drivers build config 00:01:54.601 vdpa/sfc: not in enabled drivers build config 00:01:54.601 event/*: missing internal dependency, "eventdev" 00:01:54.601 baseband/*: missing internal dependency, "bbdev" 00:01:54.601 gpu/*: missing internal dependency, "gpudev" 00:01:54.601 00:01:54.601 00:01:54.601 Build targets in project: 84 00:01:54.601 00:01:54.601 DPDK 24.03.0 00:01:54.601 00:01:54.601 User defined options 00:01:54.601 buildtype : debug 00:01:54.601 default_library : shared 00:01:54.601 libdir : lib 00:01:54.601 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:54.601 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:54.601 c_link_args : 00:01:54.601 cpu_instruction_set: native 00:01:54.601 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:54.601 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:54.601 enable_docs : false 00:01:54.601 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:54.601 enable_kmods : false 00:01:54.601 max_lcores : 128 00:01:54.601 tests : false 00:01:54.601 00:01:54.601 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:54.601 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:54.867 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:54.867 [2/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:54.867 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:54.867 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:54.867 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:54.867 [6/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:54.867 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:54.867 [8/267] Linking static target lib/librte_kvargs.a 00:01:54.867 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:54.867 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:54.867 [11/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:54.867 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:54.867 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:54.867 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:54.867 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:54.867 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:54.867 [17/267] Linking static target lib/librte_log.a 00:01:54.867 [18/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:54.867 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:54.867 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:54.867 [21/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:54.867 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:54.867 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:55.127 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:55.127 [25/267] Compiling C object 
lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:55.127 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:55.127 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:55.127 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:55.127 [29/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:55.127 [30/267] Linking static target lib/librte_pci.a 00:01:55.127 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:55.127 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:55.127 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:55.127 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:55.127 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:55.127 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:55.127 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:55.127 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:55.127 [39/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:55.390 [40/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:55.390 [41/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.390 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:55.390 [43/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.390 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:55.390 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:55.390 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:55.390 [47/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:55.390 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:55.390 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:55.390 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:55.390 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:55.390 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:55.390 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:55.390 [54/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:55.390 [55/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:55.390 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:55.390 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:55.390 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:55.390 [59/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:55.390 [60/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:55.390 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:55.390 [62/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:55.390 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:55.390 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:55.390 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:55.390 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:55.390 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:55.390 [68/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:55.390 [69/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:55.390 [70/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:55.390 [71/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:55.390 [72/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:55.390 [73/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:55.390 [74/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:55.390 [75/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:55.391 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:55.391 [77/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:55.391 [78/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:55.391 [79/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:55.391 [80/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:55.391 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:55.391 [82/267] Linking static target lib/librte_meter.a 00:01:55.391 [83/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:55.391 [84/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:55.391 [85/267] Linking static target lib/librte_telemetry.a 00:01:55.391 [86/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:55.391 [87/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:55.391 [88/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:55.391 [89/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:55.391 [90/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:55.391 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:55.391 
[92/267] Linking static target lib/librte_cmdline.a 00:01:55.391 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:55.391 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:55.391 [95/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:55.391 [96/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:55.391 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:55.391 [98/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:55.391 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:55.391 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:55.391 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:55.391 [102/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:55.391 [103/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:55.391 [104/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:55.391 [105/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:55.391 [106/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:55.391 [107/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:55.391 [108/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:55.391 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:55.391 [110/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:55.391 [111/267] Linking static target lib/librte_ring.a 00:01:55.391 [112/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:55.391 [113/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:55.391 [114/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:55.391 [115/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:55.391 [116/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:55.391 [117/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:55.391 [118/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:55.391 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:55.391 [120/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:55.391 [121/267] Linking static target lib/librte_timer.a 00:01:55.391 [122/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:55.391 [123/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:55.391 [124/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:55.391 [125/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:55.391 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:55.391 [127/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:55.391 [128/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:55.391 [129/267] Linking static target lib/librte_reorder.a 00:01:55.391 [130/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:55.391 [131/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:55.391 [132/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:55.391 [133/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:55.652 [134/267] Linking static target lib/librte_net.a 00:01:55.652 [135/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:55.652 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:55.652 [137/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:55.652 
[138/267] Linking static target lib/librte_dmadev.a 00:01:55.652 [139/267] Linking static target lib/librte_rcu.a 00:01:55.652 [140/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:55.652 [141/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:55.652 [142/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:55.652 [143/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:55.652 [144/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.652 [145/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:55.652 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:55.652 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:55.652 [148/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:55.652 [149/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:55.652 [150/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:55.652 [151/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:55.652 [152/267] Linking static target lib/librte_mempool.a 00:01:55.652 [153/267] Linking static target lib/librte_power.a 00:01:55.652 [154/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:55.652 [155/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:55.652 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:55.652 [157/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:55.652 [158/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:55.652 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:55.652 [160/267] Linking static target lib/librte_compressdev.a 00:01:55.652 
[161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:55.652 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:55.652 [163/267] Linking target lib/librte_log.so.24.1 00:01:55.652 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:55.652 [165/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:55.652 [166/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:55.652 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:55.652 [168/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:55.652 [169/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:55.652 [170/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:55.652 [171/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:55.652 [172/267] Linking static target lib/librte_eal.a 00:01:55.652 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:55.652 [174/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.652 [175/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:55.652 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:55.652 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:55.652 [178/267] Linking static target lib/librte_security.a 00:01:55.652 [179/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:55.652 [180/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:55.652 [181/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:55.652 [182/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:55.652 [183/267] Linking static target lib/librte_mbuf.a 00:01:55.652 [184/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 
00:01:55.652 [185/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:55.652 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:55.913 [187/267] Linking target lib/librte_kvargs.so.24.1 00:01:55.913 [188/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:55.913 [189/267] Linking static target lib/librte_hash.a 00:01:55.913 [190/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:55.913 [191/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:55.913 [192/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:55.913 [193/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:55.913 [194/267] Linking static target drivers/librte_bus_pci.a 00:01:55.913 [195/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.913 [196/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.913 [197/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.914 [198/267] Linking static target drivers/librte_bus_vdev.a 00:01:55.914 [199/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:55.914 [200/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.914 [201/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:55.914 [202/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.914 [203/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.914 [204/267] Linking static target drivers/librte_mempool_ring.a 00:01:55.914 [205/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.914 
[206/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:55.914 [207/267] Linking static target lib/librte_cryptodev.a 00:01:55.914 [208/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.914 [209/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:55.914 [210/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.914 [211/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.175 [212/267] Linking target lib/librte_telemetry.so.24.1 00:01:56.175 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:56.175 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.175 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.436 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.436 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.436 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:56.436 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:56.436 [220/267] Linking static target lib/librte_ethdev.a 00:01:56.436 [221/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.436 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.696 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.696 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.696 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:56.957 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.528 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:57.528 [228/267] Linking static target lib/librte_vhost.a 00:01:58.099 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.574 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.162 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.103 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.103 [233/267] Linking target lib/librte_eal.so.24.1 00:02:07.364 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:07.364 [235/267] Linking target lib/librte_ring.so.24.1 00:02:07.364 [236/267] Linking target lib/librte_meter.so.24.1 00:02:07.364 [237/267] Linking target lib/librte_timer.so.24.1 00:02:07.364 [238/267] Linking target lib/librte_pci.so.24.1 00:02:07.364 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:07.364 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:07.364 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:07.364 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:07.364 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:07.364 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:07.624 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:07.624 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:07.624 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:07.624 [248/267] Linking target lib/librte_mempool.so.24.1 00:02:07.624 [249/267] 
Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:07.624 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:07.624 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:07.624 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:07.885 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:07.885 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:02:07.885 [255/267] Linking target lib/librte_compressdev.so.24.1 00:02:07.885 [256/267] Linking target lib/librte_reorder.so.24.1 00:02:07.885 [257/267] Linking target lib/librte_net.so.24.1 00:02:07.885 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:08.146 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:08.146 [260/267] Linking target lib/librte_security.so.24.1 00:02:08.146 [261/267] Linking target lib/librte_hash.so.24.1 00:02:08.146 [262/267] Linking target lib/librte_cmdline.so.24.1 00:02:08.146 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:08.146 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:08.146 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:08.406 [266/267] Linking target lib/librte_power.so.24.1 00:02:08.406 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:08.406 INFO: autodetecting backend as ninja 00:02:08.406 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:11.707 CC lib/log/log.o 00:02:11.707 CC lib/log/log_deprecated.o 00:02:11.707 CC lib/log/log_flags.o 00:02:11.707 CC lib/ut/ut.o 00:02:11.707 CC lib/ut_mock/mock.o 00:02:11.707 LIB libspdk_log.a 00:02:11.707 LIB libspdk_ut.a 00:02:11.707 LIB libspdk_ut_mock.a 00:02:11.707 SO libspdk_log.so.7.1 00:02:11.707 SO 
libspdk_ut.so.2.0 00:02:11.969 SO libspdk_ut_mock.so.6.0 00:02:11.969 SYMLINK libspdk_ut.so 00:02:11.969 SYMLINK libspdk_log.so 00:02:11.969 SYMLINK libspdk_ut_mock.so 00:02:12.231 CC lib/ioat/ioat.o 00:02:12.231 CC lib/util/base64.o 00:02:12.231 CC lib/util/bit_array.o 00:02:12.231 CC lib/util/cpuset.o 00:02:12.231 CC lib/util/crc32.o 00:02:12.231 CC lib/util/crc16.o 00:02:12.231 CC lib/util/crc32c.o 00:02:12.231 CC lib/dma/dma.o 00:02:12.231 CC lib/util/crc32_ieee.o 00:02:12.231 CC lib/util/crc64.o 00:02:12.231 CXX lib/trace_parser/trace.o 00:02:12.231 CC lib/util/dif.o 00:02:12.231 CC lib/util/fd.o 00:02:12.231 CC lib/util/fd_group.o 00:02:12.231 CC lib/util/file.o 00:02:12.231 CC lib/util/hexlify.o 00:02:12.231 CC lib/util/iov.o 00:02:12.231 CC lib/util/math.o 00:02:12.231 CC lib/util/net.o 00:02:12.231 CC lib/util/pipe.o 00:02:12.231 CC lib/util/strerror_tls.o 00:02:12.231 CC lib/util/string.o 00:02:12.231 CC lib/util/uuid.o 00:02:12.231 CC lib/util/xor.o 00:02:12.231 CC lib/util/zipf.o 00:02:12.231 CC lib/util/md5.o 00:02:12.492 CC lib/vfio_user/host/vfio_user.o 00:02:12.492 CC lib/vfio_user/host/vfio_user_pci.o 00:02:12.492 LIB libspdk_dma.a 00:02:12.492 LIB libspdk_ioat.a 00:02:12.492 SO libspdk_dma.so.5.0 00:02:12.492 SO libspdk_ioat.so.7.0 00:02:12.492 SYMLINK libspdk_dma.so 00:02:12.492 SYMLINK libspdk_ioat.so 00:02:12.753 LIB libspdk_vfio_user.a 00:02:12.753 SO libspdk_vfio_user.so.5.0 00:02:12.753 LIB libspdk_util.a 00:02:12.753 SYMLINK libspdk_vfio_user.so 00:02:12.753 SO libspdk_util.so.10.1 00:02:13.014 SYMLINK libspdk_util.so 00:02:13.014 LIB libspdk_trace_parser.a 00:02:13.014 SO libspdk_trace_parser.so.6.0 00:02:13.274 SYMLINK libspdk_trace_parser.so 00:02:13.274 CC lib/vmd/vmd.o 00:02:13.274 CC lib/vmd/led.o 00:02:13.274 CC lib/rdma_utils/rdma_utils.o 00:02:13.274 CC lib/env_dpdk/env.o 00:02:13.274 CC lib/env_dpdk/memory.o 00:02:13.274 CC lib/env_dpdk/pci.o 00:02:13.274 CC lib/json/json_parse.o 00:02:13.274 CC lib/env_dpdk/init.o 00:02:13.274 CC 
lib/json/json_util.o 00:02:13.274 CC lib/env_dpdk/pci_virtio.o 00:02:13.274 CC lib/env_dpdk/threads.o 00:02:13.274 CC lib/json/json_write.o 00:02:13.274 CC lib/env_dpdk/pci_ioat.o 00:02:13.274 CC lib/env_dpdk/pci_vmd.o 00:02:13.274 CC lib/idxd/idxd.o 00:02:13.274 CC lib/env_dpdk/pci_idxd.o 00:02:13.274 CC lib/idxd/idxd_user.o 00:02:13.274 CC lib/env_dpdk/pci_event.o 00:02:13.274 CC lib/idxd/idxd_kernel.o 00:02:13.274 CC lib/conf/conf.o 00:02:13.274 CC lib/env_dpdk/sigbus_handler.o 00:02:13.274 CC lib/env_dpdk/pci_dpdk.o 00:02:13.274 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:13.274 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:13.534 LIB libspdk_conf.a 00:02:13.534 LIB libspdk_rdma_utils.a 00:02:13.534 SO libspdk_conf.so.6.0 00:02:13.534 LIB libspdk_json.a 00:02:13.534 SO libspdk_rdma_utils.so.1.0 00:02:13.793 SO libspdk_json.so.6.0 00:02:13.793 SYMLINK libspdk_conf.so 00:02:13.793 SYMLINK libspdk_rdma_utils.so 00:02:13.793 SYMLINK libspdk_json.so 00:02:13.794 LIB libspdk_idxd.a 00:02:13.794 SO libspdk_idxd.so.12.1 00:02:13.794 SYMLINK libspdk_idxd.so 00:02:13.794 LIB libspdk_vmd.a 00:02:13.794 SO libspdk_vmd.so.6.0 00:02:14.053 SYMLINK libspdk_vmd.so 00:02:14.053 CC lib/rdma_provider/common.o 00:02:14.053 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:14.053 CC lib/jsonrpc/jsonrpc_server.o 00:02:14.053 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:14.053 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:14.053 CC lib/jsonrpc/jsonrpc_client.o 00:02:14.313 LIB libspdk_rdma_provider.a 00:02:14.313 SO libspdk_rdma_provider.so.7.0 00:02:14.313 LIB libspdk_jsonrpc.a 00:02:14.313 SO libspdk_jsonrpc.so.6.0 00:02:14.313 SYMLINK libspdk_rdma_provider.so 00:02:14.313 SYMLINK libspdk_jsonrpc.so 00:02:14.574 LIB libspdk_env_dpdk.a 00:02:14.574 SO libspdk_env_dpdk.so.15.1 00:02:14.835 SYMLINK libspdk_env_dpdk.so 00:02:14.835 CC lib/rpc/rpc.o 00:02:15.095 LIB libspdk_rpc.a 00:02:15.095 SO libspdk_rpc.so.6.0 00:02:15.095 SYMLINK libspdk_rpc.so 00:02:15.356 CC lib/notify/notify.o 00:02:15.356 CC 
lib/notify/notify_rpc.o 00:02:15.356 CC lib/trace/trace.o 00:02:15.356 CC lib/trace/trace_flags.o 00:02:15.356 CC lib/trace/trace_rpc.o 00:02:15.356 CC lib/keyring/keyring.o 00:02:15.356 CC lib/keyring/keyring_rpc.o 00:02:15.618 LIB libspdk_notify.a 00:02:15.618 SO libspdk_notify.so.6.0 00:02:15.618 LIB libspdk_keyring.a 00:02:15.618 LIB libspdk_trace.a 00:02:15.618 SYMLINK libspdk_notify.so 00:02:15.880 SO libspdk_keyring.so.2.0 00:02:15.880 SO libspdk_trace.so.11.0 00:02:15.880 SYMLINK libspdk_keyring.so 00:02:15.880 SYMLINK libspdk_trace.so 00:02:16.141 CC lib/thread/iobuf.o 00:02:16.141 CC lib/thread/thread.o 00:02:16.141 CC lib/sock/sock.o 00:02:16.141 CC lib/sock/sock_rpc.o 00:02:16.713 LIB libspdk_sock.a 00:02:16.713 SO libspdk_sock.so.10.0 00:02:16.713 SYMLINK libspdk_sock.so 00:02:16.974 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:16.974 CC lib/nvme/nvme_ctrlr.o 00:02:16.974 CC lib/nvme/nvme_fabric.o 00:02:16.974 CC lib/nvme/nvme_ns_cmd.o 00:02:16.974 CC lib/nvme/nvme_ns.o 00:02:16.974 CC lib/nvme/nvme_pcie_common.o 00:02:16.974 CC lib/nvme/nvme_pcie.o 00:02:16.974 CC lib/nvme/nvme_qpair.o 00:02:16.974 CC lib/nvme/nvme.o 00:02:16.974 CC lib/nvme/nvme_quirks.o 00:02:16.974 CC lib/nvme/nvme_transport.o 00:02:16.974 CC lib/nvme/nvme_discovery.o 00:02:16.974 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:16.974 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:16.974 CC lib/nvme/nvme_tcp.o 00:02:16.974 CC lib/nvme/nvme_opal.o 00:02:16.974 CC lib/nvme/nvme_io_msg.o 00:02:16.974 CC lib/nvme/nvme_poll_group.o 00:02:16.974 CC lib/nvme/nvme_zns.o 00:02:16.974 CC lib/nvme/nvme_stubs.o 00:02:16.974 CC lib/nvme/nvme_rdma.o 00:02:16.974 CC lib/nvme/nvme_auth.o 00:02:16.974 CC lib/nvme/nvme_cuse.o 00:02:16.974 CC lib/nvme/nvme_vfio_user.o 00:02:17.548 LIB libspdk_thread.a 00:02:17.548 SO libspdk_thread.so.11.0 00:02:17.548 SYMLINK libspdk_thread.so 00:02:17.810 CC lib/virtio/virtio.o 00:02:17.810 CC lib/virtio/virtio_vhost_user.o 00:02:17.810 CC lib/virtio/virtio_vfio_user.o 00:02:17.810 CC 
lib/virtio/virtio_pci.o 00:02:17.810 CC lib/blob/blobstore.o 00:02:17.810 CC lib/blob/request.o 00:02:17.810 CC lib/blob/zeroes.o 00:02:17.810 CC lib/blob/blob_bs_dev.o 00:02:17.810 CC lib/fsdev/fsdev.o 00:02:17.810 CC lib/fsdev/fsdev_io.o 00:02:17.810 CC lib/fsdev/fsdev_rpc.o 00:02:18.071 CC lib/init/subsystem.o 00:02:18.071 CC lib/vfu_tgt/tgt_endpoint.o 00:02:18.071 CC lib/init/json_config.o 00:02:18.071 CC lib/vfu_tgt/tgt_rpc.o 00:02:18.071 CC lib/init/rpc.o 00:02:18.071 CC lib/init/subsystem_rpc.o 00:02:18.071 CC lib/accel/accel.o 00:02:18.071 CC lib/accel/accel_rpc.o 00:02:18.071 CC lib/accel/accel_sw.o 00:02:18.071 LIB libspdk_init.a 00:02:18.332 SO libspdk_init.so.6.0 00:02:18.332 LIB libspdk_virtio.a 00:02:18.332 LIB libspdk_vfu_tgt.a 00:02:18.332 SO libspdk_virtio.so.7.0 00:02:18.332 SO libspdk_vfu_tgt.so.3.0 00:02:18.332 SYMLINK libspdk_init.so 00:02:18.332 SYMLINK libspdk_virtio.so 00:02:18.332 SYMLINK libspdk_vfu_tgt.so 00:02:18.593 LIB libspdk_fsdev.a 00:02:18.593 SO libspdk_fsdev.so.2.0 00:02:18.593 CC lib/event/log_rpc.o 00:02:18.593 SYMLINK libspdk_fsdev.so 00:02:18.593 CC lib/event/app.o 00:02:18.593 CC lib/event/reactor.o 00:02:18.593 CC lib/event/scheduler_static.o 00:02:18.593 CC lib/event/app_rpc.o 00:02:18.854 LIB libspdk_nvme.a 00:02:18.854 LIB libspdk_accel.a 00:02:18.854 SO libspdk_accel.so.16.0 00:02:18.854 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:19.116 SO libspdk_nvme.so.15.0 00:02:19.116 SYMLINK libspdk_accel.so 00:02:19.116 LIB libspdk_event.a 00:02:19.116 SO libspdk_event.so.14.0 00:02:19.116 SYMLINK libspdk_event.so 00:02:19.116 SYMLINK libspdk_nvme.so 00:02:19.377 CC lib/bdev/bdev.o 00:02:19.377 CC lib/bdev/bdev_rpc.o 00:02:19.377 CC lib/bdev/bdev_zone.o 00:02:19.377 CC lib/bdev/part.o 00:02:19.377 CC lib/bdev/scsi_nvme.o 00:02:19.377 LIB libspdk_fuse_dispatcher.a 00:02:19.377 SO libspdk_fuse_dispatcher.so.1.0 00:02:19.638 SYMLINK libspdk_fuse_dispatcher.so 00:02:20.583 LIB libspdk_blob.a 00:02:20.583 SO libspdk_blob.so.11.0 
00:02:20.583 SYMLINK libspdk_blob.so 00:02:20.845 CC lib/blobfs/blobfs.o 00:02:20.845 CC lib/blobfs/tree.o 00:02:20.845 CC lib/lvol/lvol.o 00:02:21.791 LIB libspdk_bdev.a 00:02:21.791 LIB libspdk_blobfs.a 00:02:21.791 SO libspdk_blobfs.so.10.0 00:02:21.791 SO libspdk_bdev.so.17.0 00:02:21.791 LIB libspdk_lvol.a 00:02:21.791 SYMLINK libspdk_blobfs.so 00:02:21.791 SO libspdk_lvol.so.10.0 00:02:21.791 SYMLINK libspdk_bdev.so 00:02:21.791 SYMLINK libspdk_lvol.so 00:02:22.052 CC lib/ublk/ublk.o 00:02:22.052 CC lib/ublk/ublk_rpc.o 00:02:22.052 CC lib/scsi/dev.o 00:02:22.052 CC lib/scsi/lun.o 00:02:22.052 CC lib/scsi/port.o 00:02:22.052 CC lib/scsi/scsi.o 00:02:22.052 CC lib/scsi/scsi_pr.o 00:02:22.052 CC lib/scsi/scsi_bdev.o 00:02:22.052 CC lib/nbd/nbd.o 00:02:22.052 CC lib/scsi/scsi_rpc.o 00:02:22.052 CC lib/scsi/task.o 00:02:22.052 CC lib/nbd/nbd_rpc.o 00:02:22.052 CC lib/nvmf/ctrlr.o 00:02:22.052 CC lib/nvmf/ctrlr_bdev.o 00:02:22.052 CC lib/nvmf/ctrlr_discovery.o 00:02:22.052 CC lib/ftl/ftl_core.o 00:02:22.052 CC lib/ftl/ftl_debug.o 00:02:22.052 CC lib/ftl/ftl_init.o 00:02:22.052 CC lib/nvmf/subsystem.o 00:02:22.052 CC lib/ftl/ftl_layout.o 00:02:22.052 CC lib/nvmf/nvmf.o 00:02:22.052 CC lib/nvmf/nvmf_rpc.o 00:02:22.052 CC lib/ftl/ftl_io.o 00:02:22.052 CC lib/nvmf/transport.o 00:02:22.311 CC lib/ftl/ftl_sb.o 00:02:22.311 CC lib/nvmf/tcp.o 00:02:22.311 CC lib/ftl/ftl_l2p.o 00:02:22.311 CC lib/nvmf/stubs.o 00:02:22.311 CC lib/ftl/ftl_l2p_flat.o 00:02:22.311 CC lib/ftl/ftl_nv_cache.o 00:02:22.311 CC lib/nvmf/mdns_server.o 00:02:22.311 CC lib/nvmf/vfio_user.o 00:02:22.311 CC lib/ftl/ftl_band.o 00:02:22.311 CC lib/nvmf/rdma.o 00:02:22.311 CC lib/ftl/ftl_band_ops.o 00:02:22.311 CC lib/nvmf/auth.o 00:02:22.311 CC lib/ftl/ftl_writer.o 00:02:22.311 CC lib/ftl/ftl_rq.o 00:02:22.311 CC lib/ftl/ftl_reloc.o 00:02:22.311 CC lib/ftl/ftl_l2p_cache.o 00:02:22.311 CC lib/ftl/ftl_p2l.o 00:02:22.311 CC lib/ftl/ftl_p2l_log.o 00:02:22.311 CC lib/ftl/mngt/ftl_mngt.o 00:02:22.311 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:02:22.311 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:22.311 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:22.311 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:22.311 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:22.311 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:22.311 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:22.311 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:22.311 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:22.311 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:22.311 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:22.311 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:22.311 CC lib/ftl/utils/ftl_conf.o 00:02:22.311 CC lib/ftl/utils/ftl_md.o 00:02:22.311 CC lib/ftl/utils/ftl_mempool.o 00:02:22.311 CC lib/ftl/utils/ftl_bitmap.o 00:02:22.311 CC lib/ftl/utils/ftl_property.o 00:02:22.311 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:22.311 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:22.312 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:22.312 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:22.312 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:22.312 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:22.312 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:22.312 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:22.312 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:22.312 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:22.312 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:22.312 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:22.312 CC lib/ftl/ftl_trace.o 00:02:22.312 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:22.312 CC lib/ftl/base/ftl_base_bdev.o 00:02:22.312 CC lib/ftl/base/ftl_base_dev.o 00:02:22.570 LIB libspdk_nbd.a 00:02:22.570 SO libspdk_nbd.so.7.0 00:02:22.831 LIB libspdk_scsi.a 00:02:22.831 SYMLINK libspdk_nbd.so 00:02:22.831 SO libspdk_scsi.so.9.0 00:02:22.831 LIB libspdk_ublk.a 00:02:22.831 SYMLINK libspdk_scsi.so 00:02:22.831 SO libspdk_ublk.so.3.0 00:02:22.831 SYMLINK libspdk_ublk.so 00:02:23.093 LIB libspdk_ftl.a 00:02:23.093 CC lib/iscsi/conn.o 00:02:23.093 CC lib/vhost/vhost.o 00:02:23.093 CC lib/iscsi/init_grp.o 00:02:23.093 CC 
lib/vhost/vhost_scsi.o 00:02:23.093 CC lib/vhost/vhost_rpc.o 00:02:23.093 CC lib/iscsi/iscsi.o 00:02:23.093 CC lib/vhost/rte_vhost_user.o 00:02:23.093 CC lib/iscsi/param.o 00:02:23.093 CC lib/vhost/vhost_blk.o 00:02:23.093 CC lib/iscsi/portal_grp.o 00:02:23.093 CC lib/iscsi/tgt_node.o 00:02:23.093 CC lib/iscsi/iscsi_rpc.o 00:02:23.093 CC lib/iscsi/iscsi_subsystem.o 00:02:23.093 CC lib/iscsi/task.o 00:02:23.354 SO libspdk_ftl.so.9.0 00:02:23.615 SYMLINK libspdk_ftl.so 00:02:24.188 LIB libspdk_nvmf.a 00:02:24.188 LIB libspdk_vhost.a 00:02:24.188 SO libspdk_nvmf.so.20.0 00:02:24.188 SO libspdk_vhost.so.8.0 00:02:24.449 SYMLINK libspdk_vhost.so 00:02:24.449 SYMLINK libspdk_nvmf.so 00:02:24.449 LIB libspdk_iscsi.a 00:02:24.449 SO libspdk_iscsi.so.8.0 00:02:24.710 SYMLINK libspdk_iscsi.so 00:02:25.283 CC module/env_dpdk/env_dpdk_rpc.o 00:02:25.283 CC module/vfu_device/vfu_virtio.o 00:02:25.283 CC module/vfu_device/vfu_virtio_blk.o 00:02:25.283 CC module/vfu_device/vfu_virtio_scsi.o 00:02:25.283 CC module/vfu_device/vfu_virtio_rpc.o 00:02:25.283 CC module/vfu_device/vfu_virtio_fs.o 00:02:25.283 CC module/accel/error/accel_error.o 00:02:25.283 CC module/accel/error/accel_error_rpc.o 00:02:25.283 LIB libspdk_env_dpdk_rpc.a 00:02:25.283 CC module/scheduler/gscheduler/gscheduler.o 00:02:25.283 CC module/sock/posix/posix.o 00:02:25.283 CC module/fsdev/aio/fsdev_aio.o 00:02:25.283 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:25.283 CC module/fsdev/aio/linux_aio_mgr.o 00:02:25.283 CC module/blob/bdev/blob_bdev.o 00:02:25.283 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:25.283 CC module/keyring/file/keyring.o 00:02:25.283 CC module/accel/iaa/accel_iaa.o 00:02:25.283 CC module/keyring/file/keyring_rpc.o 00:02:25.283 CC module/keyring/linux/keyring.o 00:02:25.283 CC module/accel/iaa/accel_iaa_rpc.o 00:02:25.283 CC module/keyring/linux/keyring_rpc.o 00:02:25.283 CC module/accel/ioat/accel_ioat.o 00:02:25.283 CC module/accel/ioat/accel_ioat_rpc.o 00:02:25.283 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:02:25.283 CC module/accel/dsa/accel_dsa.o 00:02:25.283 CC module/accel/dsa/accel_dsa_rpc.o 00:02:25.283 SO libspdk_env_dpdk_rpc.so.6.0 00:02:25.545 SYMLINK libspdk_env_dpdk_rpc.so 00:02:25.545 LIB libspdk_accel_error.a 00:02:25.545 LIB libspdk_scheduler_dpdk_governor.a 00:02:25.545 LIB libspdk_keyring_linux.a 00:02:25.545 LIB libspdk_scheduler_gscheduler.a 00:02:25.545 LIB libspdk_keyring_file.a 00:02:25.545 SO libspdk_accel_error.so.2.0 00:02:25.545 SO libspdk_keyring_linux.so.1.0 00:02:25.545 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:25.545 SO libspdk_scheduler_gscheduler.so.4.0 00:02:25.545 SO libspdk_keyring_file.so.2.0 00:02:25.545 LIB libspdk_scheduler_dynamic.a 00:02:25.545 LIB libspdk_accel_iaa.a 00:02:25.545 LIB libspdk_accel_ioat.a 00:02:25.545 SO libspdk_scheduler_dynamic.so.4.0 00:02:25.545 LIB libspdk_blob_bdev.a 00:02:25.545 SO libspdk_accel_iaa.so.3.0 00:02:25.545 SO libspdk_accel_ioat.so.6.0 00:02:25.545 SYMLINK libspdk_keyring_linux.so 00:02:25.545 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:25.545 SYMLINK libspdk_accel_error.so 00:02:25.545 SYMLINK libspdk_scheduler_gscheduler.so 00:02:25.545 SYMLINK libspdk_keyring_file.so 00:02:25.545 SO libspdk_blob_bdev.so.11.0 00:02:25.545 LIB libspdk_accel_dsa.a 00:02:25.805 SYMLINK libspdk_scheduler_dynamic.so 00:02:25.805 SYMLINK libspdk_accel_ioat.so 00:02:25.805 SYMLINK libspdk_accel_iaa.so 00:02:25.805 SO libspdk_accel_dsa.so.5.0 00:02:25.805 SYMLINK libspdk_blob_bdev.so 00:02:25.805 LIB libspdk_vfu_device.a 00:02:25.805 SYMLINK libspdk_accel_dsa.so 00:02:25.805 SO libspdk_vfu_device.so.3.0 00:02:25.805 SYMLINK libspdk_vfu_device.so 00:02:25.805 LIB libspdk_fsdev_aio.a 00:02:26.066 SO libspdk_fsdev_aio.so.1.0 00:02:26.066 LIB libspdk_sock_posix.a 00:02:26.066 SO libspdk_sock_posix.so.6.0 00:02:26.066 SYMLINK libspdk_fsdev_aio.so 00:02:26.066 SYMLINK libspdk_sock_posix.so 00:02:26.327 CC module/blobfs/bdev/blobfs_bdev.o 00:02:26.327 CC 
module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:26.327 CC module/bdev/delay/vbdev_delay.o 00:02:26.327 CC module/bdev/raid/bdev_raid.o 00:02:26.327 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:26.327 CC module/bdev/raid/bdev_raid_rpc.o 00:02:26.327 CC module/bdev/raid/bdev_raid_sb.o 00:02:26.327 CC module/bdev/raid/raid0.o 00:02:26.327 CC module/bdev/gpt/gpt.o 00:02:26.327 CC module/bdev/raid/raid1.o 00:02:26.327 CC module/bdev/gpt/vbdev_gpt.o 00:02:26.327 CC module/bdev/raid/concat.o 00:02:26.327 CC module/bdev/error/vbdev_error.o 00:02:26.327 CC module/bdev/passthru/vbdev_passthru.o 00:02:26.327 CC module/bdev/error/vbdev_error_rpc.o 00:02:26.327 CC module/bdev/split/vbdev_split.o 00:02:26.327 CC module/bdev/split/vbdev_split_rpc.o 00:02:26.327 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:26.327 CC module/bdev/iscsi/bdev_iscsi.o 00:02:26.327 CC module/bdev/aio/bdev_aio.o 00:02:26.327 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:26.327 CC module/bdev/aio/bdev_aio_rpc.o 00:02:26.327 CC module/bdev/null/bdev_null.o 00:02:26.327 CC module/bdev/nvme/bdev_nvme.o 00:02:26.327 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:26.327 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:26.327 CC module/bdev/null/bdev_null_rpc.o 00:02:26.327 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:26.327 CC module/bdev/ftl/bdev_ftl.o 00:02:26.327 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:26.327 CC module/bdev/lvol/vbdev_lvol.o 00:02:26.327 CC module/bdev/nvme/vbdev_opal.o 00:02:26.327 CC module/bdev/nvme/nvme_rpc.o 00:02:26.327 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:26.327 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:26.327 CC module/bdev/nvme/bdev_mdns_client.o 00:02:26.327 CC module/bdev/malloc/bdev_malloc.o 00:02:26.327 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:26.327 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:26.327 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:26.327 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:26.327 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:26.327 
LIB libspdk_blobfs_bdev.a 00:02:26.589 SO libspdk_blobfs_bdev.so.6.0 00:02:26.589 SYMLINK libspdk_blobfs_bdev.so 00:02:26.589 LIB libspdk_bdev_gpt.a 00:02:26.589 LIB libspdk_bdev_split.a 00:02:26.589 SO libspdk_bdev_gpt.so.6.0 00:02:26.589 LIB libspdk_bdev_null.a 00:02:26.589 SO libspdk_bdev_split.so.6.0 00:02:26.589 LIB libspdk_bdev_error.a 00:02:26.589 LIB libspdk_bdev_ftl.a 00:02:26.589 SO libspdk_bdev_null.so.6.0 00:02:26.589 LIB libspdk_bdev_passthru.a 00:02:26.589 SO libspdk_bdev_error.so.6.0 00:02:26.589 SYMLINK libspdk_bdev_gpt.so 00:02:26.589 LIB libspdk_bdev_aio.a 00:02:26.589 SYMLINK libspdk_bdev_split.so 00:02:26.589 SO libspdk_bdev_ftl.so.6.0 00:02:26.589 LIB libspdk_bdev_zone_block.a 00:02:26.589 LIB libspdk_bdev_iscsi.a 00:02:26.589 SO libspdk_bdev_passthru.so.6.0 00:02:26.589 LIB libspdk_bdev_malloc.a 00:02:26.589 LIB libspdk_bdev_delay.a 00:02:26.589 SO libspdk_bdev_aio.so.6.0 00:02:26.589 SYMLINK libspdk_bdev_null.so 00:02:26.589 SO libspdk_bdev_iscsi.so.6.0 00:02:26.589 SO libspdk_bdev_zone_block.so.6.0 00:02:26.589 SYMLINK libspdk_bdev_error.so 00:02:26.589 SO libspdk_bdev_malloc.so.6.0 00:02:26.589 SO libspdk_bdev_delay.so.6.0 00:02:26.589 SYMLINK libspdk_bdev_ftl.so 00:02:26.851 SYMLINK libspdk_bdev_passthru.so 00:02:26.851 SYMLINK libspdk_bdev_aio.so 00:02:26.851 SYMLINK libspdk_bdev_zone_block.so 00:02:26.851 SYMLINK libspdk_bdev_iscsi.so 00:02:26.851 SYMLINK libspdk_bdev_delay.so 00:02:26.851 SYMLINK libspdk_bdev_malloc.so 00:02:26.851 LIB libspdk_bdev_virtio.a 00:02:26.851 LIB libspdk_bdev_lvol.a 00:02:26.851 SO libspdk_bdev_virtio.so.6.0 00:02:26.851 SO libspdk_bdev_lvol.so.6.0 00:02:26.851 SYMLINK libspdk_bdev_lvol.so 00:02:26.851 SYMLINK libspdk_bdev_virtio.so 00:02:27.112 LIB libspdk_bdev_raid.a 00:02:27.112 SO libspdk_bdev_raid.so.6.0 00:02:27.373 SYMLINK libspdk_bdev_raid.so 00:02:28.315 LIB libspdk_bdev_nvme.a 00:02:28.576 SO libspdk_bdev_nvme.so.7.1 00:02:28.576 SYMLINK libspdk_bdev_nvme.so 00:02:29.517 CC 
module/event/subsystems/keyring/keyring.o 00:02:29.517 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:29.517 CC module/event/subsystems/vmd/vmd.o 00:02:29.517 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:29.517 CC module/event/subsystems/iobuf/iobuf.o 00:02:29.517 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:29.517 CC module/event/subsystems/scheduler/scheduler.o 00:02:29.517 CC module/event/subsystems/sock/sock.o 00:02:29.517 CC module/event/subsystems/fsdev/fsdev.o 00:02:29.517 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:29.517 LIB libspdk_event_vfu_tgt.a 00:02:29.517 LIB libspdk_event_keyring.a 00:02:29.517 LIB libspdk_event_sock.a 00:02:29.517 LIB libspdk_event_vhost_blk.a 00:02:29.517 LIB libspdk_event_vmd.a 00:02:29.517 LIB libspdk_event_scheduler.a 00:02:29.517 LIB libspdk_event_fsdev.a 00:02:29.517 SO libspdk_event_vhost_blk.so.3.0 00:02:29.517 SO libspdk_event_sock.so.5.0 00:02:29.517 LIB libspdk_event_iobuf.a 00:02:29.517 SO libspdk_event_keyring.so.1.0 00:02:29.517 SO libspdk_event_vfu_tgt.so.3.0 00:02:29.517 SO libspdk_event_vmd.so.6.0 00:02:29.517 SO libspdk_event_scheduler.so.4.0 00:02:29.517 SO libspdk_event_fsdev.so.1.0 00:02:29.517 SO libspdk_event_iobuf.so.3.0 00:02:29.517 SYMLINK libspdk_event_vhost_blk.so 00:02:29.518 SYMLINK libspdk_event_keyring.so 00:02:29.518 SYMLINK libspdk_event_sock.so 00:02:29.518 SYMLINK libspdk_event_vfu_tgt.so 00:02:29.518 SYMLINK libspdk_event_vmd.so 00:02:29.518 SYMLINK libspdk_event_scheduler.so 00:02:29.518 SYMLINK libspdk_event_fsdev.so 00:02:29.518 SYMLINK libspdk_event_iobuf.so 00:02:30.088 CC module/event/subsystems/accel/accel.o 00:02:30.088 LIB libspdk_event_accel.a 00:02:30.088 SO libspdk_event_accel.so.6.0 00:02:30.088 SYMLINK libspdk_event_accel.so 00:02:30.659 CC module/event/subsystems/bdev/bdev.o 00:02:30.659 LIB libspdk_event_bdev.a 00:02:30.659 SO libspdk_event_bdev.so.6.0 00:02:30.659 SYMLINK libspdk_event_bdev.so 00:02:31.231 CC module/event/subsystems/nvmf/nvmf_rpc.o 
00:02:31.231 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:31.231 CC module/event/subsystems/ublk/ublk.o 00:02:31.231 CC module/event/subsystems/scsi/scsi.o 00:02:31.231 CC module/event/subsystems/nbd/nbd.o 00:02:31.231 LIB libspdk_event_ublk.a 00:02:31.231 LIB libspdk_event_nbd.a 00:02:31.231 LIB libspdk_event_scsi.a 00:02:31.231 SO libspdk_event_ublk.so.3.0 00:02:31.231 SO libspdk_event_nbd.so.6.0 00:02:31.231 SO libspdk_event_scsi.so.6.0 00:02:31.493 LIB libspdk_event_nvmf.a 00:02:31.493 SYMLINK libspdk_event_ublk.so 00:02:31.493 SYMLINK libspdk_event_nbd.so 00:02:31.493 SO libspdk_event_nvmf.so.6.0 00:02:31.493 SYMLINK libspdk_event_scsi.so 00:02:31.493 SYMLINK libspdk_event_nvmf.so 00:02:31.755 CC module/event/subsystems/iscsi/iscsi.o 00:02:31.755 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:32.015 LIB libspdk_event_vhost_scsi.a 00:02:32.015 LIB libspdk_event_iscsi.a 00:02:32.015 SO libspdk_event_vhost_scsi.so.3.0 00:02:32.015 SO libspdk_event_iscsi.so.6.0 00:02:32.015 SYMLINK libspdk_event_vhost_scsi.so 00:02:32.015 SYMLINK libspdk_event_iscsi.so 00:02:32.276 SO libspdk.so.6.0 00:02:32.276 SYMLINK libspdk.so 00:02:32.538 CC test/rpc_client/rpc_client_test.o 00:02:32.538 TEST_HEADER include/spdk/accel.h 00:02:32.538 TEST_HEADER include/spdk/accel_module.h 00:02:32.538 TEST_HEADER include/spdk/assert.h 00:02:32.538 TEST_HEADER include/spdk/base64.h 00:02:32.538 TEST_HEADER include/spdk/barrier.h 00:02:32.538 TEST_HEADER include/spdk/bdev.h 00:02:32.538 TEST_HEADER include/spdk/bdev_zone.h 00:02:32.538 TEST_HEADER include/spdk/bdev_module.h 00:02:32.538 TEST_HEADER include/spdk/bit_array.h 00:02:32.538 TEST_HEADER include/spdk/bit_pool.h 00:02:32.538 TEST_HEADER include/spdk/blob_bdev.h 00:02:32.538 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:32.538 TEST_HEADER include/spdk/blob.h 00:02:32.538 TEST_HEADER include/spdk/blobfs.h 00:02:32.538 TEST_HEADER include/spdk/conf.h 00:02:32.538 TEST_HEADER include/spdk/cpuset.h 00:02:32.538 TEST_HEADER 
include/spdk/config.h 00:02:32.538 TEST_HEADER include/spdk/crc32.h 00:02:32.538 TEST_HEADER include/spdk/crc16.h 00:02:32.538 CXX app/trace/trace.o 00:02:32.538 TEST_HEADER include/spdk/dif.h 00:02:32.538 TEST_HEADER include/spdk/crc64.h 00:02:32.538 TEST_HEADER include/spdk/dma.h 00:02:32.538 CC app/trace_record/trace_record.o 00:02:32.538 TEST_HEADER include/spdk/env_dpdk.h 00:02:32.538 TEST_HEADER include/spdk/endian.h 00:02:32.538 TEST_HEADER include/spdk/event.h 00:02:32.538 TEST_HEADER include/spdk/fd_group.h 00:02:32.538 TEST_HEADER include/spdk/env.h 00:02:32.538 CC app/spdk_nvme_discover/discovery_aer.o 00:02:32.538 CC app/spdk_top/spdk_top.o 00:02:32.538 TEST_HEADER include/spdk/fd.h 00:02:32.538 TEST_HEADER include/spdk/fsdev.h 00:02:32.538 TEST_HEADER include/spdk/file.h 00:02:32.538 CC app/spdk_nvme_identify/identify.o 00:02:32.538 TEST_HEADER include/spdk/fsdev_module.h 00:02:32.538 TEST_HEADER include/spdk/ftl.h 00:02:32.538 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:32.538 TEST_HEADER include/spdk/gpt_spec.h 00:02:32.800 TEST_HEADER include/spdk/hexlify.h 00:02:32.800 TEST_HEADER include/spdk/histogram_data.h 00:02:32.800 CC app/spdk_lspci/spdk_lspci.o 00:02:32.800 TEST_HEADER include/spdk/idxd.h 00:02:32.800 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:32.800 TEST_HEADER include/spdk/idxd_spec.h 00:02:32.800 CC app/spdk_nvme_perf/perf.o 00:02:32.800 TEST_HEADER include/spdk/init.h 00:02:32.800 TEST_HEADER include/spdk/ioat.h 00:02:32.800 TEST_HEADER include/spdk/ioat_spec.h 00:02:32.800 TEST_HEADER include/spdk/iscsi_spec.h 00:02:32.800 TEST_HEADER include/spdk/json.h 00:02:32.800 TEST_HEADER include/spdk/jsonrpc.h 00:02:32.800 TEST_HEADER include/spdk/keyring.h 00:02:32.800 TEST_HEADER include/spdk/keyring_module.h 00:02:32.800 TEST_HEADER include/spdk/log.h 00:02:32.800 TEST_HEADER include/spdk/likely.h 00:02:32.800 TEST_HEADER include/spdk/lvol.h 00:02:32.800 TEST_HEADER include/spdk/md5.h 00:02:32.800 TEST_HEADER include/spdk/memory.h 
00:02:32.800 TEST_HEADER include/spdk/mmio.h 00:02:32.800 TEST_HEADER include/spdk/nbd.h 00:02:32.800 TEST_HEADER include/spdk/notify.h 00:02:32.800 TEST_HEADER include/spdk/net.h 00:02:32.800 TEST_HEADER include/spdk/nvme.h 00:02:32.800 TEST_HEADER include/spdk/nvme_intel.h 00:02:32.800 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:32.800 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:32.800 TEST_HEADER include/spdk/nvme_spec.h 00:02:32.800 CC app/iscsi_tgt/iscsi_tgt.o 00:02:32.800 TEST_HEADER include/spdk/nvme_zns.h 00:02:32.800 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:32.800 CC app/nvmf_tgt/nvmf_main.o 00:02:32.800 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:32.800 TEST_HEADER include/spdk/nvmf_transport.h 00:02:32.800 TEST_HEADER include/spdk/nvmf.h 00:02:32.800 TEST_HEADER include/spdk/nvmf_spec.h 00:02:32.800 TEST_HEADER include/spdk/opal.h 00:02:32.800 TEST_HEADER include/spdk/opal_spec.h 00:02:32.800 TEST_HEADER include/spdk/pipe.h 00:02:32.800 TEST_HEADER include/spdk/pci_ids.h 00:02:32.800 CC app/spdk_dd/spdk_dd.o 00:02:32.800 TEST_HEADER include/spdk/reduce.h 00:02:32.800 TEST_HEADER include/spdk/queue.h 00:02:32.800 TEST_HEADER include/spdk/scsi.h 00:02:32.800 TEST_HEADER include/spdk/rpc.h 00:02:32.800 TEST_HEADER include/spdk/scheduler.h 00:02:32.800 TEST_HEADER include/spdk/scsi_spec.h 00:02:32.800 TEST_HEADER include/spdk/sock.h 00:02:32.800 TEST_HEADER include/spdk/stdinc.h 00:02:32.800 TEST_HEADER include/spdk/string.h 00:02:32.800 TEST_HEADER include/spdk/thread.h 00:02:32.800 TEST_HEADER include/spdk/trace.h 00:02:32.800 TEST_HEADER include/spdk/trace_parser.h 00:02:32.800 TEST_HEADER include/spdk/tree.h 00:02:32.800 TEST_HEADER include/spdk/ublk.h 00:02:32.800 TEST_HEADER include/spdk/util.h 00:02:32.800 TEST_HEADER include/spdk/uuid.h 00:02:32.800 TEST_HEADER include/spdk/version.h 00:02:32.800 CC app/spdk_tgt/spdk_tgt.o 00:02:32.800 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:32.800 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:32.800 
TEST_HEADER include/spdk/vhost.h 00:02:32.800 TEST_HEADER include/spdk/vmd.h 00:02:32.800 TEST_HEADER include/spdk/xor.h 00:02:32.800 TEST_HEADER include/spdk/zipf.h 00:02:32.800 CXX test/cpp_headers/accel.o 00:02:32.800 CXX test/cpp_headers/accel_module.o 00:02:32.800 CXX test/cpp_headers/assert.o 00:02:32.800 CXX test/cpp_headers/barrier.o 00:02:32.800 CXX test/cpp_headers/base64.o 00:02:32.800 CXX test/cpp_headers/bdev.o 00:02:32.800 CXX test/cpp_headers/bdev_module.o 00:02:32.800 CXX test/cpp_headers/bit_array.o 00:02:32.800 CXX test/cpp_headers/bdev_zone.o 00:02:32.800 CXX test/cpp_headers/bit_pool.o 00:02:32.801 CXX test/cpp_headers/blobfs_bdev.o 00:02:32.801 CXX test/cpp_headers/blob_bdev.o 00:02:32.801 CXX test/cpp_headers/blob.o 00:02:32.801 CXX test/cpp_headers/blobfs.o 00:02:32.801 CXX test/cpp_headers/conf.o 00:02:32.801 CXX test/cpp_headers/config.o 00:02:32.801 CXX test/cpp_headers/crc16.o 00:02:32.801 CXX test/cpp_headers/cpuset.o 00:02:32.801 CXX test/cpp_headers/crc32.o 00:02:32.801 CXX test/cpp_headers/dif.o 00:02:32.801 CXX test/cpp_headers/crc64.o 00:02:32.801 CXX test/cpp_headers/dma.o 00:02:32.801 CXX test/cpp_headers/endian.o 00:02:32.801 CXX test/cpp_headers/env_dpdk.o 00:02:32.801 CXX test/cpp_headers/env.o 00:02:32.801 CXX test/cpp_headers/event.o 00:02:32.801 CXX test/cpp_headers/fd_group.o 00:02:32.801 CXX test/cpp_headers/fd.o 00:02:32.801 CXX test/cpp_headers/file.o 00:02:32.801 CXX test/cpp_headers/fsdev_module.o 00:02:32.801 CXX test/cpp_headers/fsdev.o 00:02:32.801 CXX test/cpp_headers/ftl.o 00:02:32.801 CXX test/cpp_headers/fuse_dispatcher.o 00:02:32.801 CXX test/cpp_headers/gpt_spec.o 00:02:32.801 CXX test/cpp_headers/idxd.o 00:02:32.801 CXX test/cpp_headers/hexlify.o 00:02:32.801 CXX test/cpp_headers/histogram_data.o 00:02:32.801 CXX test/cpp_headers/init.o 00:02:32.801 CXX test/cpp_headers/idxd_spec.o 00:02:32.801 CXX test/cpp_headers/ioat_spec.o 00:02:32.801 CXX test/cpp_headers/ioat.o 00:02:32.801 CXX 
test/cpp_headers/iscsi_spec.o 00:02:32.801 CXX test/cpp_headers/jsonrpc.o 00:02:32.801 CXX test/cpp_headers/keyring.o 00:02:32.801 CXX test/cpp_headers/json.o 00:02:32.801 CXX test/cpp_headers/likely.o 00:02:32.801 CXX test/cpp_headers/keyring_module.o 00:02:32.801 CXX test/cpp_headers/md5.o 00:02:32.801 CXX test/cpp_headers/log.o 00:02:32.801 CXX test/cpp_headers/memory.o 00:02:32.801 CC test/app/stub/stub.o 00:02:32.801 CXX test/cpp_headers/lvol.o 00:02:32.801 CXX test/cpp_headers/mmio.o 00:02:32.801 CXX test/cpp_headers/notify.o 00:02:32.801 CXX test/cpp_headers/nbd.o 00:02:32.801 CXX test/cpp_headers/net.o 00:02:32.801 CC test/app/histogram_perf/histogram_perf.o 00:02:32.801 CC test/env/memory/memory_ut.o 00:02:32.801 CXX test/cpp_headers/nvme.o 00:02:32.801 CXX test/cpp_headers/nvme_intel.o 00:02:32.801 CC test/app/jsoncat/jsoncat.o 00:02:32.801 CXX test/cpp_headers/nvme_spec.o 00:02:32.801 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:32.801 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:32.801 CXX test/cpp_headers/nvme_ocssd.o 00:02:32.801 CXX test/cpp_headers/nvme_zns.o 00:02:32.801 CXX test/cpp_headers/nvmf_cmd.o 00:02:32.801 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:32.801 CXX test/cpp_headers/nvmf_spec.o 00:02:32.801 CXX test/cpp_headers/nvmf.o 00:02:32.801 CXX test/cpp_headers/nvmf_transport.o 00:02:32.801 CC examples/util/zipf/zipf.o 00:02:32.801 CXX test/cpp_headers/opal.o 00:02:32.801 CC test/thread/poller_perf/poller_perf.o 00:02:32.801 CXX test/cpp_headers/opal_spec.o 00:02:32.801 CXX test/cpp_headers/pipe.o 00:02:32.801 CC test/env/pci/pci_ut.o 00:02:32.801 CXX test/cpp_headers/queue.o 00:02:32.801 CXX test/cpp_headers/pci_ids.o 00:02:32.801 CXX test/cpp_headers/rpc.o 00:02:32.801 CXX test/cpp_headers/reduce.o 00:02:32.801 CXX test/cpp_headers/scsi.o 00:02:32.801 CC examples/ioat/verify/verify.o 00:02:32.801 CXX test/cpp_headers/sock.o 00:02:32.801 CXX test/cpp_headers/scheduler.o 00:02:32.801 CXX test/cpp_headers/scsi_spec.o 
00:02:32.801 CXX test/cpp_headers/string.o 00:02:32.801 CXX test/cpp_headers/stdinc.o 00:02:32.801 CXX test/cpp_headers/thread.o 00:02:32.801 CXX test/cpp_headers/trace.o 00:02:32.801 CXX test/cpp_headers/trace_parser.o 00:02:32.801 CXX test/cpp_headers/tree.o 00:02:32.801 LINK rpc_client_test 00:02:32.801 CXX test/cpp_headers/ublk.o 00:02:32.801 CXX test/cpp_headers/util.o 00:02:32.801 CXX test/cpp_headers/uuid.o 00:02:32.801 CXX test/cpp_headers/version.o 00:02:32.801 CXX test/cpp_headers/vfio_user_pci.o 00:02:32.801 CXX test/cpp_headers/vhost.o 00:02:32.801 CXX test/cpp_headers/vfio_user_spec.o 00:02:32.801 CC examples/ioat/perf/perf.o 00:02:32.801 CXX test/cpp_headers/xor.o 00:02:32.801 CC test/env/vtophys/vtophys.o 00:02:32.801 CXX test/cpp_headers/vmd.o 00:02:32.801 CXX test/cpp_headers/zipf.o 00:02:32.801 CC test/app/bdev_svc/bdev_svc.o 00:02:33.068 CC test/dma/test_dma/test_dma.o 00:02:33.068 CC app/fio/nvme/fio_plugin.o 00:02:33.068 LINK spdk_lspci 00:02:33.068 CC app/fio/bdev/fio_plugin.o 00:02:33.068 LINK interrupt_tgt 00:02:33.068 LINK spdk_nvme_discover 00:02:33.068 LINK iscsi_tgt 00:02:33.068 LINK spdk_trace_record 00:02:33.328 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:33.328 CC test/env/mem_callbacks/mem_callbacks.o 00:02:33.328 LINK nvmf_tgt 00:02:33.328 LINK spdk_tgt 00:02:33.328 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:33.328 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:33.328 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:33.328 LINK jsoncat 00:02:33.328 LINK poller_perf 00:02:33.328 LINK spdk_dd 00:02:33.328 LINK spdk_trace 00:02:33.586 LINK vtophys 00:02:33.586 LINK zipf 00:02:33.586 LINK histogram_perf 00:02:33.586 LINK env_dpdk_post_init 00:02:33.586 LINK stub 00:02:33.586 LINK ioat_perf 00:02:33.586 LINK bdev_svc 00:02:33.586 LINK verify 00:02:33.845 LINK vhost_fuzz 00:02:33.846 LINK pci_ut 00:02:33.846 LINK nvme_fuzz 00:02:33.846 CC app/vhost/vhost.o 00:02:33.846 CC test/event/reactor_perf/reactor_perf.o 00:02:33.846 CC 
test/event/reactor/reactor.o 00:02:33.846 CC test/event/event_perf/event_perf.o 00:02:33.846 LINK spdk_bdev 00:02:33.846 LINK spdk_nvme 00:02:33.846 CC test/event/app_repeat/app_repeat.o 00:02:33.846 CC test/event/scheduler/scheduler.o 00:02:33.846 LINK test_dma 00:02:33.846 CC examples/sock/hello_world/hello_sock.o 00:02:33.846 CC examples/vmd/led/led.o 00:02:33.846 CC examples/vmd/lsvmd/lsvmd.o 00:02:33.846 CC examples/idxd/perf/perf.o 00:02:33.846 LINK mem_callbacks 00:02:34.105 LINK reactor_perf 00:02:34.105 CC examples/thread/thread/thread_ex.o 00:02:34.105 LINK reactor 00:02:34.105 LINK event_perf 00:02:34.105 LINK spdk_nvme_identify 00:02:34.105 LINK spdk_nvme_perf 00:02:34.105 LINK vhost 00:02:34.105 LINK app_repeat 00:02:34.105 LINK spdk_top 00:02:34.105 LINK led 00:02:34.105 LINK lsvmd 00:02:34.105 LINK scheduler 00:02:34.105 LINK hello_sock 00:02:34.365 LINK idxd_perf 00:02:34.365 LINK thread 00:02:34.365 CC test/nvme/sgl/sgl.o 00:02:34.365 CC test/accel/dif/dif.o 00:02:34.365 CC test/nvme/reserve/reserve.o 00:02:34.365 CC test/nvme/reset/reset.o 00:02:34.365 CC test/nvme/err_injection/err_injection.o 00:02:34.365 CC test/nvme/aer/aer.o 00:02:34.365 CC test/nvme/e2edp/nvme_dp.o 00:02:34.365 LINK memory_ut 00:02:34.365 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:34.365 CC test/nvme/startup/startup.o 00:02:34.365 CC test/nvme/overhead/overhead.o 00:02:34.365 CC test/nvme/simple_copy/simple_copy.o 00:02:34.365 CC test/nvme/compliance/nvme_compliance.o 00:02:34.365 CC test/nvme/fdp/fdp.o 00:02:34.365 CC test/nvme/boot_partition/boot_partition.o 00:02:34.365 CC test/nvme/connect_stress/connect_stress.o 00:02:34.365 CC test/nvme/cuse/cuse.o 00:02:34.365 CC test/nvme/fused_ordering/fused_ordering.o 00:02:34.365 CC test/blobfs/mkfs/mkfs.o 00:02:34.626 CC test/lvol/esnap/esnap.o 00:02:34.626 CC examples/nvme/hello_world/hello_world.o 00:02:34.626 LINK boot_partition 00:02:34.626 CC examples/nvme/reconnect/reconnect.o 00:02:34.626 LINK reserve 00:02:34.626 CC 
examples/nvme/cmb_copy/cmb_copy.o 00:02:34.626 LINK err_injection 00:02:34.626 LINK connect_stress 00:02:34.626 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:34.626 CC examples/nvme/arbitration/arbitration.o 00:02:34.626 LINK startup 00:02:34.626 CC examples/nvme/hotplug/hotplug.o 00:02:34.626 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:34.626 CC examples/nvme/abort/abort.o 00:02:34.626 LINK doorbell_aers 00:02:34.626 LINK fused_ordering 00:02:34.626 LINK sgl 00:02:34.626 LINK reset 00:02:34.887 LINK aer 00:02:34.887 LINK mkfs 00:02:34.887 LINK simple_copy 00:02:34.887 LINK overhead 00:02:34.887 LINK nvme_dp 00:02:34.887 LINK nvme_compliance 00:02:34.887 LINK fdp 00:02:34.887 LINK iscsi_fuzz 00:02:34.887 CC examples/accel/perf/accel_perf.o 00:02:34.887 CC examples/blob/cli/blobcli.o 00:02:34.887 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:34.887 CC examples/blob/hello_world/hello_blob.o 00:02:34.887 LINK cmb_copy 00:02:34.887 LINK pmr_persistence 00:02:34.887 LINK hello_world 00:02:34.887 LINK hotplug 00:02:35.149 LINK dif 00:02:35.149 LINK arbitration 00:02:35.149 LINK reconnect 00:02:35.149 LINK abort 00:02:35.149 LINK hello_blob 00:02:35.149 LINK nvme_manage 00:02:35.149 LINK hello_fsdev 00:02:35.410 LINK accel_perf 00:02:35.410 LINK blobcli 00:02:35.670 CC test/bdev/bdevio/bdevio.o 00:02:35.670 LINK cuse 00:02:35.930 CC examples/bdev/hello_world/hello_bdev.o 00:02:35.930 CC examples/bdev/bdevperf/bdevperf.o 00:02:36.190 LINK bdevio 00:02:36.190 LINK hello_bdev 00:02:36.761 LINK bdevperf 00:02:37.333 CC examples/nvmf/nvmf/nvmf.o 00:02:37.594 LINK nvmf 00:02:38.976 LINK esnap 00:02:39.548 00:02:39.548 real 0m54.176s 00:02:39.548 user 7m45.796s 00:02:39.548 sys 4m22.434s 00:02:39.548 10:44:30 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:39.548 10:44:30 make -- common/autotest_common.sh@10 -- $ set +x 00:02:39.548 ************************************ 00:02:39.548 END TEST make 00:02:39.548 
************************************ 00:02:39.548 10:44:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:39.548 10:44:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:39.548 10:44:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:39.548 10:44:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.548 10:44:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:39.548 10:44:30 -- pm/common@44 -- $ pid=2927207 00:02:39.548 10:44:30 -- pm/common@50 -- $ kill -TERM 2927207 00:02:39.548 10:44:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.548 10:44:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:39.548 10:44:30 -- pm/common@44 -- $ pid=2927208 00:02:39.548 10:44:30 -- pm/common@50 -- $ kill -TERM 2927208 00:02:39.548 10:44:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.548 10:44:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:39.548 10:44:30 -- pm/common@44 -- $ pid=2927210 00:02:39.548 10:44:30 -- pm/common@50 -- $ kill -TERM 2927210 00:02:39.548 10:44:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.548 10:44:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:39.548 10:44:30 -- pm/common@44 -- $ pid=2927237 00:02:39.548 10:44:30 -- pm/common@50 -- $ sudo -E kill -TERM 2927237 00:02:39.548 10:44:30 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:39.548 10:44:30 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:39.548 10:44:30 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 
00:02:39.548 10:44:30 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:39.548 10:44:30 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:39.548 10:44:30 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:39.548 10:44:30 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:39.548 10:44:30 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:39.548 10:44:30 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:39.548 10:44:30 -- scripts/common.sh@336 -- # IFS=.-: 00:02:39.548 10:44:30 -- scripts/common.sh@336 -- # read -ra ver1 00:02:39.548 10:44:30 -- scripts/common.sh@337 -- # IFS=.-: 00:02:39.548 10:44:30 -- scripts/common.sh@337 -- # read -ra ver2 00:02:39.548 10:44:30 -- scripts/common.sh@338 -- # local 'op=<' 00:02:39.548 10:44:30 -- scripts/common.sh@340 -- # ver1_l=2 00:02:39.548 10:44:30 -- scripts/common.sh@341 -- # ver2_l=1 00:02:39.548 10:44:30 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:39.548 10:44:30 -- scripts/common.sh@344 -- # case "$op" in 00:02:39.548 10:44:30 -- scripts/common.sh@345 -- # : 1 00:02:39.548 10:44:30 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:39.548 10:44:30 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:39.548 10:44:30 -- scripts/common.sh@365 -- # decimal 1 00:02:39.548 10:44:30 -- scripts/common.sh@353 -- # local d=1 00:02:39.548 10:44:30 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:39.548 10:44:30 -- scripts/common.sh@355 -- # echo 1 00:02:39.548 10:44:30 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:39.811 10:44:30 -- scripts/common.sh@366 -- # decimal 2 00:02:39.811 10:44:30 -- scripts/common.sh@353 -- # local d=2 00:02:39.811 10:44:30 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:39.811 10:44:30 -- scripts/common.sh@355 -- # echo 2 00:02:39.811 10:44:30 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:39.811 10:44:30 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:39.811 10:44:30 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:39.811 10:44:30 -- scripts/common.sh@368 -- # return 0 00:02:39.811 10:44:30 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:39.811 10:44:30 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:39.811 --rc genhtml_branch_coverage=1 00:02:39.811 --rc genhtml_function_coverage=1 00:02:39.811 --rc genhtml_legend=1 00:02:39.811 --rc geninfo_all_blocks=1 00:02:39.811 --rc geninfo_unexecuted_blocks=1 00:02:39.811 00:02:39.811 ' 00:02:39.811 10:44:30 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:39.811 --rc genhtml_branch_coverage=1 00:02:39.811 --rc genhtml_function_coverage=1 00:02:39.811 --rc genhtml_legend=1 00:02:39.811 --rc geninfo_all_blocks=1 00:02:39.811 --rc geninfo_unexecuted_blocks=1 00:02:39.811 00:02:39.811 ' 00:02:39.811 10:44:30 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:39.811 --rc genhtml_branch_coverage=1 00:02:39.811 --rc 
genhtml_function_coverage=1 00:02:39.811 --rc genhtml_legend=1 00:02:39.811 --rc geninfo_all_blocks=1 00:02:39.811 --rc geninfo_unexecuted_blocks=1 00:02:39.811 00:02:39.811 ' 00:02:39.811 10:44:30 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:39.811 --rc genhtml_branch_coverage=1 00:02:39.811 --rc genhtml_function_coverage=1 00:02:39.811 --rc genhtml_legend=1 00:02:39.811 --rc geninfo_all_blocks=1 00:02:39.811 --rc geninfo_unexecuted_blocks=1 00:02:39.811 00:02:39.811 ' 00:02:39.811 10:44:30 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:39.811 10:44:30 -- nvmf/common.sh@7 -- # uname -s 00:02:39.811 10:44:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:39.811 10:44:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:39.811 10:44:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:39.811 10:44:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:39.811 10:44:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:39.811 10:44:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:39.811 10:44:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:39.811 10:44:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:39.811 10:44:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:39.811 10:44:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:39.811 10:44:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:39.811 10:44:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:39.811 10:44:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:39.811 10:44:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:39.811 10:44:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:39.811 10:44:30 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:39.811 10:44:30 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:39.811 10:44:30 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:39.811 10:44:30 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:39.811 10:44:30 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:39.811 10:44:30 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:39.811 10:44:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:39.811 10:44:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:39.811 10:44:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:39.811 10:44:30 -- paths/export.sh@5 -- # export PATH 00:02:39.811 10:44:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:39.811 10:44:31 -- nvmf/common.sh@51 -- # : 0 00:02:39.811 10:44:31 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:39.811 10:44:31 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:39.811 10:44:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:39.811 10:44:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:39.811 10:44:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:39.811 10:44:31 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:39.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:39.811 10:44:31 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:39.811 10:44:31 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:39.811 10:44:31 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:39.811 10:44:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:39.811 10:44:31 -- spdk/autotest.sh@32 -- # uname -s 00:02:39.811 10:44:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:39.811 10:44:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:39.811 10:44:31 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:39.811 10:44:31 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:39.811 10:44:31 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:39.811 10:44:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:39.811 10:44:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:39.811 10:44:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:39.811 10:44:31 -- spdk/autotest.sh@48 -- # udevadm_pid=2992576 00:02:39.811 10:44:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:39.811 10:44:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:39.811 10:44:31 -- pm/common@17 -- # local monitor 00:02:39.811 10:44:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.811 10:44:31 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:39.811 10:44:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.811 10:44:31 -- pm/common@21 -- # date +%s 00:02:39.811 10:44:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.811 10:44:31 -- pm/common@21 -- # date +%s 00:02:39.811 10:44:31 -- pm/common@25 -- # sleep 1 00:02:39.811 10:44:31 -- pm/common@21 -- # date +%s 00:02:39.811 10:44:31 -- pm/common@21 -- # date +%s 00:02:39.811 10:44:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730886271 00:02:39.811 10:44:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730886271 00:02:39.811 10:44:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730886271 00:02:39.812 10:44:31 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730886271 00:02:39.812 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730886271_collect-vmstat.pm.log 00:02:39.812 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730886271_collect-cpu-load.pm.log 00:02:39.812 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730886271_collect-cpu-temp.pm.log 00:02:39.812 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730886271_collect-bmc-pm.bmc.pm.log 00:02:40.751 
10:44:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:40.751 10:44:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:40.751 10:44:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:40.751 10:44:32 -- common/autotest_common.sh@10 -- # set +x 00:02:40.751 10:44:32 -- spdk/autotest.sh@59 -- # create_test_list 00:02:40.751 10:44:32 -- common/autotest_common.sh@750 -- # xtrace_disable 00:02:40.752 10:44:32 -- common/autotest_common.sh@10 -- # set +x 00:02:40.752 10:44:32 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:40.752 10:44:32 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:40.752 10:44:32 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:40.752 10:44:32 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:40.752 10:44:32 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:40.752 10:44:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:40.752 10:44:32 -- common/autotest_common.sh@1455 -- # uname 00:02:40.752 10:44:32 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:40.752 10:44:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:40.752 10:44:32 -- common/autotest_common.sh@1475 -- # uname 00:02:40.752 10:44:32 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:40.752 10:44:32 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:40.752 10:44:32 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:41.011 lcov: LCOV version 1.15 00:02:41.011 10:44:32 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:03.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:03.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:11.273 10:45:02 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:11.273 10:45:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:11.273 10:45:02 -- common/autotest_common.sh@10 -- # set +x 00:03:11.273 10:45:02 -- spdk/autotest.sh@78 -- # rm -f 00:03:11.273 10:45:02 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.573 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:14.573 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:14.573 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:14.573 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:14.574 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:14.574 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:14.574 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:14.574 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:14.574 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:14.574 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:14.574 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:14.574 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:14.574 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:14.574 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:14.574 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:14.574 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:14.574 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:14.834 10:45:06 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:14.834 10:45:06 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:14.834 10:45:06 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:14.834 10:45:06 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:14.834 10:45:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:14.834 10:45:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:14.834 10:45:06 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:14.834 10:45:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:14.834 10:45:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:14.834 10:45:06 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:14.834 10:45:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:14.834 10:45:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:14.834 10:45:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:14.834 10:45:06 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:14.834 10:45:06 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:14.834 No valid GPT data, bailing 00:03:14.834 10:45:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:14.834 10:45:06 -- scripts/common.sh@394 -- # pt= 00:03:14.834 10:45:06 -- scripts/common.sh@395 -- # return 1 00:03:14.834 10:45:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:15.095 1+0 records in 00:03:15.095 1+0 records out 00:03:15.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00202371 s, 518 MB/s 00:03:15.095 10:45:06 -- spdk/autotest.sh@105 -- # sync 00:03:15.095 10:45:06 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:15.095 10:45:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:15.095 10:45:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:25.092 10:45:14 -- spdk/autotest.sh@111 -- # uname -s
00:03:25.092 10:45:14 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:25.092 10:45:14 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:25.092 10:45:14 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:27.003 Hugepages
00:03:27.003 node hugesize free / total
00:03:27.003 node0 1048576kB 0 / 0
00:03:27.003 node0 2048kB 0 / 0
00:03:27.003 node1 1048576kB 0 / 0
00:03:27.003 node1 2048kB 0 / 0
00:03:27.003
00:03:27.003 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:27.003 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:03:27.004 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:03:27.004 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:03:27.004 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:03:27.004 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:03:27.004 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:03:27.004 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:03:27.004 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:03:27.004 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:03:27.004 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:03:27.004 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:03:27.004 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:03:27.004 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:03:27.004 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:03:27.004 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:03:27.004 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:03:27.004 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:03:27.004 10:45:18 -- spdk/autotest.sh@117 -- # uname -s
00:03:27.004 10:45:18 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:27.004 10:45:18 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:27.004 10:45:18 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.306 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:30.306 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:30.306 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:30.306 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:30.306 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:30.306 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:30.306 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:30.306 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:30.306 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:30.306 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:30.306 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:30.306 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:30.306 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:30.306 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:30.306 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:30.306 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:32.222 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:32.222 10:45:23 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:33.165 10:45:24 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:33.165 10:45:24 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:33.165 10:45:24 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:33.165 10:45:24 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:33.165 10:45:24 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:33.165 10:45:24 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:33.165 10:45:24 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:33.165 10:45:24 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:33.165 10:45:24 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:03:33.426 10:45:24 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:33.426 10:45:24 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:33.426 10:45:24 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.726 Waiting for block devices as requested 00:03:36.726 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:36.726 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:36.726 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:36.726 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:36.726 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:36.726 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:36.726 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:36.726 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:36.986 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:36.986 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:37.246 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:37.246 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:37.246 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:37.246 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:37.508 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:37.508 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:37.508 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:37.767 10:45:29 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:37.767 10:45:29 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:37.767 10:45:29 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:37.767 10:45:29 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:37.767 10:45:29 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:37.767 10:45:29 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:37.767 10:45:29 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:37.767 10:45:29 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:37.767 10:45:29 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:37.767 10:45:29 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:37.767 10:45:29 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:37.767 10:45:29 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:37.767 10:45:29 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:37.767 10:45:29 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:37.767 10:45:29 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:37.767 10:45:29 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:37.767 10:45:29 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:37.767 10:45:29 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:37.767 10:45:29 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:37.767 10:45:29 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:37.767 10:45:29 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:37.767 10:45:29 -- common/autotest_common.sh@1541 -- # continue 00:03:37.767 10:45:29 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:37.767 10:45:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:37.767 10:45:29 -- common/autotest_common.sh@10 -- # set +x 00:03:38.029 10:45:29 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:38.029 10:45:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:38.029 10:45:29 -- common/autotest_common.sh@10 -- # set +x 00:03:38.029 10:45:29 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:40.577 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:40.838 0000:80:01.7 (8086 0b00): 
ioatdma -> vfio-pci 00:03:40.838 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:40.838 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:40.838 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:40.838 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:40.838 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:40.838 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:40.838 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:40.838 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:40.838 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:40.838 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:40.838 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:40.838 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:40.838 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:40.838 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:40.838 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:41.411 10:45:32 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:41.411 10:45:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:41.411 10:45:32 -- common/autotest_common.sh@10 -- # set +x 00:03:41.411 10:45:32 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:41.411 10:45:32 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:41.411 10:45:32 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:41.411 10:45:32 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:41.411 10:45:32 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:41.411 10:45:32 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:41.411 10:45:32 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:41.411 10:45:32 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:41.411 10:45:32 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:41.411 10:45:32 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:41.411 10:45:32 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:41.411 10:45:32 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:41.411 10:45:32 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:41.411 10:45:32 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:41.411 10:45:32 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:41.411 10:45:32 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:41.411 10:45:32 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:41.411 10:45:32 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:03:41.411 10:45:32 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:41.411 10:45:32 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:41.411 10:45:32 -- common/autotest_common.sh@1570 -- # return 0 00:03:41.411 10:45:32 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:41.411 10:45:32 -- common/autotest_common.sh@1578 -- # return 0 00:03:41.411 10:45:32 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:41.411 10:45:32 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:41.411 10:45:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:41.411 10:45:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:41.411 10:45:32 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:41.411 10:45:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:41.411 10:45:32 -- common/autotest_common.sh@10 -- # set +x 00:03:41.411 10:45:32 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:41.411 10:45:32 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:41.411 10:45:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:41.411 10:45:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:41.411 10:45:32 -- common/autotest_common.sh@10 -- # set +x 00:03:41.411 ************************************ 
00:03:41.411 START TEST env 00:03:41.411 ************************************ 00:03:41.411 10:45:32 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:41.673 * Looking for test storage... 00:03:41.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:41.673 10:45:32 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:41.673 10:45:32 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:41.673 10:45:32 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:41.673 10:45:32 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:41.673 10:45:32 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:41.673 10:45:32 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:41.673 10:45:32 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:41.673 10:45:32 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:41.673 10:45:32 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:41.673 10:45:32 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:41.673 10:45:32 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:41.673 10:45:32 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:41.673 10:45:32 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:41.673 10:45:32 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:41.673 10:45:32 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:41.673 10:45:32 env -- scripts/common.sh@344 -- # case "$op" in 00:03:41.673 10:45:32 env -- scripts/common.sh@345 -- # : 1 00:03:41.673 10:45:32 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:41.673 10:45:32 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:41.673 10:45:32 env -- scripts/common.sh@365 -- # decimal 1 00:03:41.673 10:45:32 env -- scripts/common.sh@353 -- # local d=1 00:03:41.673 10:45:32 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:41.673 10:45:32 env -- scripts/common.sh@355 -- # echo 1 00:03:41.673 10:45:32 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:41.673 10:45:32 env -- scripts/common.sh@366 -- # decimal 2 00:03:41.673 10:45:32 env -- scripts/common.sh@353 -- # local d=2 00:03:41.673 10:45:32 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:41.673 10:45:32 env -- scripts/common.sh@355 -- # echo 2 00:03:41.673 10:45:32 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:41.673 10:45:32 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:41.673 10:45:32 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:41.673 10:45:32 env -- scripts/common.sh@368 -- # return 0 00:03:41.673 10:45:32 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:41.673 10:45:32 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:41.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.673 --rc genhtml_branch_coverage=1 00:03:41.673 --rc genhtml_function_coverage=1 00:03:41.673 --rc genhtml_legend=1 00:03:41.673 --rc geninfo_all_blocks=1 00:03:41.673 --rc geninfo_unexecuted_blocks=1 00:03:41.673 00:03:41.673 ' 00:03:41.673 10:45:32 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:41.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.673 --rc genhtml_branch_coverage=1 00:03:41.673 --rc genhtml_function_coverage=1 00:03:41.673 --rc genhtml_legend=1 00:03:41.673 --rc geninfo_all_blocks=1 00:03:41.673 --rc geninfo_unexecuted_blocks=1 00:03:41.673 00:03:41.673 ' 00:03:41.673 10:45:32 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:41.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:41.673 --rc genhtml_branch_coverage=1 00:03:41.673 --rc genhtml_function_coverage=1 00:03:41.673 --rc genhtml_legend=1 00:03:41.673 --rc geninfo_all_blocks=1 00:03:41.673 --rc geninfo_unexecuted_blocks=1 00:03:41.673 00:03:41.673 ' 00:03:41.673 10:45:32 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:41.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.673 --rc genhtml_branch_coverage=1 00:03:41.673 --rc genhtml_function_coverage=1 00:03:41.673 --rc genhtml_legend=1 00:03:41.673 --rc geninfo_all_blocks=1 00:03:41.673 --rc geninfo_unexecuted_blocks=1 00:03:41.673 00:03:41.673 ' 00:03:41.673 10:45:32 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:41.673 10:45:32 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:41.673 10:45:32 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:41.673 10:45:32 env -- common/autotest_common.sh@10 -- # set +x 00:03:41.673 ************************************ 00:03:41.673 START TEST env_memory 00:03:41.673 ************************************ 00:03:41.673 10:45:32 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:41.673 00:03:41.673 00:03:41.673 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.673 http://cunit.sourceforge.net/ 00:03:41.673 00:03:41.673 00:03:41.673 Suite: memory 00:03:41.673 Test: alloc and free memory map ...[2024-11-06 10:45:33.028484] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:41.673 passed 00:03:41.673 Test: mem map translation ...[2024-11-06 10:45:33.053854] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:41.673 [2024-11-06 
10:45:33.053872] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:41.673 [2024-11-06 10:45:33.053918] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:41.673 [2024-11-06 10:45:33.053925] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:41.935 passed 00:03:41.935 Test: mem map registration ...[2024-11-06 10:45:33.108975] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:41.935 [2024-11-06 10:45:33.108995] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:41.935 passed 00:03:41.935 Test: mem map adjacent registrations ...passed 00:03:41.935 00:03:41.935 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.935 suites 1 1 n/a 0 0 00:03:41.935 tests 4 4 4 0 0 00:03:41.935 asserts 152 152 152 0 n/a 00:03:41.935 00:03:41.935 Elapsed time = 0.193 seconds 00:03:41.935 00:03:41.935 real 0m0.207s 00:03:41.935 user 0m0.196s 00:03:41.935 sys 0m0.010s 00:03:41.935 10:45:33 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:41.935 10:45:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:41.935 ************************************ 00:03:41.935 END TEST env_memory 00:03:41.935 ************************************ 00:03:41.935 10:45:33 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:41.935 10:45:33 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 
']' 00:03:41.935 10:45:33 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:41.935 10:45:33 env -- common/autotest_common.sh@10 -- # set +x 00:03:41.935 ************************************ 00:03:41.935 START TEST env_vtophys 00:03:41.935 ************************************ 00:03:41.935 10:45:33 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:41.935 EAL: lib.eal log level changed from notice to debug 00:03:41.935 EAL: Detected lcore 0 as core 0 on socket 0 00:03:41.935 EAL: Detected lcore 1 as core 1 on socket 0 00:03:41.935 EAL: Detected lcore 2 as core 2 on socket 0 00:03:41.935 EAL: Detected lcore 3 as core 3 on socket 0 00:03:41.935 EAL: Detected lcore 4 as core 4 on socket 0 00:03:41.935 EAL: Detected lcore 5 as core 5 on socket 0 00:03:41.935 EAL: Detected lcore 6 as core 6 on socket 0 00:03:41.935 EAL: Detected lcore 7 as core 7 on socket 0 00:03:41.935 EAL: Detected lcore 8 as core 8 on socket 0 00:03:41.935 EAL: Detected lcore 9 as core 9 on socket 0 00:03:41.935 EAL: Detected lcore 10 as core 10 on socket 0 00:03:41.935 EAL: Detected lcore 11 as core 11 on socket 0 00:03:41.935 EAL: Detected lcore 12 as core 12 on socket 0 00:03:41.935 EAL: Detected lcore 13 as core 13 on socket 0 00:03:41.935 EAL: Detected lcore 14 as core 14 on socket 0 00:03:41.935 EAL: Detected lcore 15 as core 15 on socket 0 00:03:41.935 EAL: Detected lcore 16 as core 16 on socket 0 00:03:41.935 EAL: Detected lcore 17 as core 17 on socket 0 00:03:41.935 EAL: Detected lcore 18 as core 18 on socket 0 00:03:41.935 EAL: Detected lcore 19 as core 19 on socket 0 00:03:41.935 EAL: Detected lcore 20 as core 20 on socket 0 00:03:41.935 EAL: Detected lcore 21 as core 21 on socket 0 00:03:41.935 EAL: Detected lcore 22 as core 22 on socket 0 00:03:41.935 EAL: Detected lcore 23 as core 23 on socket 0 00:03:41.935 EAL: Detected lcore 24 as core 24 on socket 0 00:03:41.935 EAL: Detected lcore 25 
as core 25 on socket 0 00:03:41.935 EAL: Detected lcore 26 as core 26 on socket 0 00:03:41.935 EAL: Detected lcore 27 as core 27 on socket 0 00:03:41.935 EAL: Detected lcore 28 as core 28 on socket 0 00:03:41.935 EAL: Detected lcore 29 as core 29 on socket 0 00:03:41.935 EAL: Detected lcore 30 as core 30 on socket 0 00:03:41.935 EAL: Detected lcore 31 as core 31 on socket 0 00:03:41.935 EAL: Detected lcore 32 as core 32 on socket 0 00:03:41.935 EAL: Detected lcore 33 as core 33 on socket 0 00:03:41.935 EAL: Detected lcore 34 as core 34 on socket 0 00:03:41.935 EAL: Detected lcore 35 as core 35 on socket 0 00:03:41.935 EAL: Detected lcore 36 as core 0 on socket 1 00:03:41.935 EAL: Detected lcore 37 as core 1 on socket 1 00:03:41.935 EAL: Detected lcore 38 as core 2 on socket 1 00:03:41.935 EAL: Detected lcore 39 as core 3 on socket 1 00:03:41.935 EAL: Detected lcore 40 as core 4 on socket 1 00:03:41.935 EAL: Detected lcore 41 as core 5 on socket 1 00:03:41.935 EAL: Detected lcore 42 as core 6 on socket 1 00:03:41.935 EAL: Detected lcore 43 as core 7 on socket 1 00:03:41.935 EAL: Detected lcore 44 as core 8 on socket 1 00:03:41.935 EAL: Detected lcore 45 as core 9 on socket 1 00:03:41.935 EAL: Detected lcore 46 as core 10 on socket 1 00:03:41.935 EAL: Detected lcore 47 as core 11 on socket 1 00:03:41.935 EAL: Detected lcore 48 as core 12 on socket 1 00:03:41.935 EAL: Detected lcore 49 as core 13 on socket 1 00:03:41.935 EAL: Detected lcore 50 as core 14 on socket 1 00:03:41.935 EAL: Detected lcore 51 as core 15 on socket 1 00:03:41.935 EAL: Detected lcore 52 as core 16 on socket 1 00:03:41.935 EAL: Detected lcore 53 as core 17 on socket 1 00:03:41.935 EAL: Detected lcore 54 as core 18 on socket 1 00:03:41.935 EAL: Detected lcore 55 as core 19 on socket 1 00:03:41.935 EAL: Detected lcore 56 as core 20 on socket 1 00:03:41.935 EAL: Detected lcore 57 as core 21 on socket 1 00:03:41.935 EAL: Detected lcore 58 as core 22 on socket 1 00:03:41.935 EAL: Detected lcore 59 as 
core 23 on socket 1 00:03:41.935 EAL: Detected lcore 60 as core 24 on socket 1 00:03:41.935 EAL: Detected lcore 61 as core 25 on socket 1 00:03:41.935 EAL: Detected lcore 62 as core 26 on socket 1 00:03:41.935 EAL: Detected lcore 63 as core 27 on socket 1 00:03:41.935 EAL: Detected lcore 64 as core 28 on socket 1 00:03:41.935 EAL: Detected lcore 65 as core 29 on socket 1 00:03:41.935 EAL: Detected lcore 66 as core 30 on socket 1 00:03:41.935 EAL: Detected lcore 67 as core 31 on socket 1 00:03:41.935 EAL: Detected lcore 68 as core 32 on socket 1 00:03:41.935 EAL: Detected lcore 69 as core 33 on socket 1 00:03:41.935 EAL: Detected lcore 70 as core 34 on socket 1 00:03:41.935 EAL: Detected lcore 71 as core 35 on socket 1 00:03:41.935 EAL: Detected lcore 72 as core 0 on socket 0 00:03:41.935 EAL: Detected lcore 73 as core 1 on socket 0 00:03:41.935 EAL: Detected lcore 74 as core 2 on socket 0 00:03:41.935 EAL: Detected lcore 75 as core 3 on socket 0 00:03:41.935 EAL: Detected lcore 76 as core 4 on socket 0 00:03:41.935 EAL: Detected lcore 77 as core 5 on socket 0 00:03:41.935 EAL: Detected lcore 78 as core 6 on socket 0 00:03:41.935 EAL: Detected lcore 79 as core 7 on socket 0 00:03:41.935 EAL: Detected lcore 80 as core 8 on socket 0 00:03:41.935 EAL: Detected lcore 81 as core 9 on socket 0 00:03:41.935 EAL: Detected lcore 82 as core 10 on socket 0 00:03:41.936 EAL: Detected lcore 83 as core 11 on socket 0 00:03:41.936 EAL: Detected lcore 84 as core 12 on socket 0 00:03:41.936 EAL: Detected lcore 85 as core 13 on socket 0 00:03:41.936 EAL: Detected lcore 86 as core 14 on socket 0 00:03:41.936 EAL: Detected lcore 87 as core 15 on socket 0 00:03:41.936 EAL: Detected lcore 88 as core 16 on socket 0 00:03:41.936 EAL: Detected lcore 89 as core 17 on socket 0 00:03:41.936 EAL: Detected lcore 90 as core 18 on socket 0 00:03:41.936 EAL: Detected lcore 91 as core 19 on socket 0 00:03:41.936 EAL: Detected lcore 92 as core 20 on socket 0 00:03:41.936 EAL: Detected lcore 93 as 
core 21 on socket 0 00:03:41.936 EAL: Detected lcore 94 as core 22 on socket 0 00:03:41.936 EAL: Detected lcore 95 as core 23 on socket 0 00:03:41.936 EAL: Detected lcore 96 as core 24 on socket 0 00:03:41.936 EAL: Detected lcore 97 as core 25 on socket 0 00:03:41.936 EAL: Detected lcore 98 as core 26 on socket 0 00:03:41.936 EAL: Detected lcore 99 as core 27 on socket 0 00:03:41.936 EAL: Detected lcore 100 as core 28 on socket 0 00:03:41.936 EAL: Detected lcore 101 as core 29 on socket 0 00:03:41.936 EAL: Detected lcore 102 as core 30 on socket 0 00:03:41.936 EAL: Detected lcore 103 as core 31 on socket 0 00:03:41.936 EAL: Detected lcore 104 as core 32 on socket 0 00:03:41.936 EAL: Detected lcore 105 as core 33 on socket 0 00:03:41.936 EAL: Detected lcore 106 as core 34 on socket 0 00:03:41.936 EAL: Detected lcore 107 as core 35 on socket 0 00:03:41.936 EAL: Detected lcore 108 as core 0 on socket 1 00:03:41.936 EAL: Detected lcore 109 as core 1 on socket 1 00:03:41.936 EAL: Detected lcore 110 as core 2 on socket 1 00:03:41.936 EAL: Detected lcore 111 as core 3 on socket 1 00:03:41.936 EAL: Detected lcore 112 as core 4 on socket 1 00:03:41.936 EAL: Detected lcore 113 as core 5 on socket 1 00:03:41.936 EAL: Detected lcore 114 as core 6 on socket 1 00:03:41.936 EAL: Detected lcore 115 as core 7 on socket 1 00:03:41.936 EAL: Detected lcore 116 as core 8 on socket 1 00:03:41.936 EAL: Detected lcore 117 as core 9 on socket 1 00:03:41.936 EAL: Detected lcore 118 as core 10 on socket 1 00:03:41.936 EAL: Detected lcore 119 as core 11 on socket 1 00:03:41.936 EAL: Detected lcore 120 as core 12 on socket 1 00:03:41.936 EAL: Detected lcore 121 as core 13 on socket 1 00:03:41.936 EAL: Detected lcore 122 as core 14 on socket 1 00:03:41.936 EAL: Detected lcore 123 as core 15 on socket 1 00:03:41.936 EAL: Detected lcore 124 as core 16 on socket 1 00:03:41.936 EAL: Detected lcore 125 as core 17 on socket 1 00:03:41.936 EAL: Detected lcore 126 as core 18 on socket 1 00:03:41.936 
EAL: Detected lcore 127 as core 19 on socket 1 00:03:41.936 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:41.936 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:41.936 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:41.936 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:41.936 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:41.936 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:41.936 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:41.936 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:41.936 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:41.936 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:41.936 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:41.936 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:41.936 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:41.936 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:41.936 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:41.936 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:41.936 EAL: Maximum logical cores by configuration: 128 00:03:41.936 EAL: Detected CPU lcores: 128 00:03:41.936 EAL: Detected NUMA nodes: 2 00:03:41.936 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:41.936 EAL: Detected shared linkage of DPDK 00:03:41.936 EAL: No shared files mode enabled, IPC will be disabled 00:03:41.936 EAL: Bus pci wants IOVA as 'DC' 00:03:41.936 EAL: Buses did not request a specific IOVA mode. 00:03:41.936 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:41.936 EAL: Selected IOVA mode 'VA' 00:03:41.936 EAL: Probing VFIO support... 00:03:41.936 EAL: IOMMU type 1 (Type 1) is supported 00:03:41.936 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:41.936 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:41.936 EAL: VFIO support initialized 00:03:41.936 EAL: Ask a virtual area of 0x2e000 bytes 00:03:41.936 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:41.936 EAL: Setting up physically contiguous memory... 
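The "Detected lcore N as core M on socket S" lines above come from the per-CPU topology attributes the kernel exposes under sysfs; a minimal sketch of the same lookup follows (the helper name and the base-directory parameter are illustrative, not DPDK code):

```shell
# Illustrative: reproduce one "lcore N as core M on socket S" line from the
# same sysfs attributes EAL consults at startup.
lcore_topology() {
    cpu=$1
    base=${2:-/sys/devices/system/cpu}
    echo "lcore $cpu as core $(cat "$base/cpu$cpu/topology/core_id")" \
         "on socket $(cat "$base/cpu$cpu/topology/physical_package_id")"
}
```

On the dual-socket machine in this log, `lcore_topology 36` should report core 0 on socket 1, matching the detection output above.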
00:03:41.936 EAL: Setting maximum number of open files to 524288 00:03:41.936 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:41.936 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:41.936 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:41.936 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.936 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:41.936 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.936 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.936 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:41.936 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:41.936 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.936 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:41.936 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.936 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.936 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:41.936 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:41.936 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.936 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:41.936 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.936 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.936 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:41.936 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:41.936 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.936 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:41.936 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.936 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.936 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:41.936 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:41.936 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:41.936 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.936 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:41.936 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:41.936 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.936 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:41.936 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:41.936 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.936 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:41.936 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:41.936 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.936 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:41.936 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:41.936 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.936 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:41.936 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:41.936 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.936 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:41.936 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:41.936 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.936 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:41.936 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:41.936 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.936 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:41.936 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:41.936 EAL: Hugepages will be freed exactly as allocated. 
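Each "VA reserved for memseg list ..., size 400000000" entry above follows directly from the parameters EAL printed: every list holds 8192 segments (n_segs:8192) of 2 MiB hugepages (hugepage_sz:2097152). A quick check of that arithmetic:

```shell
# Reservation size per memseg list = segment count * hugepage size.
n_segs=8192                        # n_segs:8192 in the log
hugepage_sz=$((2 * 1024 * 1024))   # hugepage_sz:2097152 (2 MiB) in the log
printf '0x%x\n' $((n_segs * hugepage_sz))   # prints 0x400000000 (16 GiB)
```

So each of the eight lists (4 per socket, 2 sockets) reserves 16 GiB of virtual address space, which is why the hex sizes in the log are all 0x400000000.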
00:03:41.936 EAL: No shared files mode enabled, IPC is disabled 00:03:41.936 EAL: No shared files mode enabled, IPC is disabled 00:03:41.936 EAL: TSC frequency is ~2400000 KHz 00:03:41.936 EAL: Main lcore 0 is ready (tid=7fcd676eaa00;cpuset=[0]) 00:03:41.936 EAL: Trying to obtain current memory policy. 00:03:41.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.936 EAL: Restoring previous memory policy: 0 00:03:41.936 EAL: request: mp_malloc_sync 00:03:41.936 EAL: No shared files mode enabled, IPC is disabled 00:03:41.936 EAL: Heap on socket 0 was expanded by 2MB 00:03:41.936 EAL: No shared files mode enabled, IPC is disabled 00:03:41.936 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:41.936 EAL: Mem event callback 'spdk:(nil)' registered 00:03:42.198 00:03:42.198 00:03:42.198 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.198 http://cunit.sourceforge.net/ 00:03:42.198 00:03:42.198 00:03:42.198 Suite: components_suite 00:03:42.198 Test: vtophys_malloc_test ...passed 00:03:42.198 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:42.198 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.198 EAL: Restoring previous memory policy: 4 00:03:42.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.198 EAL: request: mp_malloc_sync 00:03:42.198 EAL: No shared files mode enabled, IPC is disabled 00:03:42.198 EAL: Heap on socket 0 was expanded by 4MB 00:03:42.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.198 EAL: request: mp_malloc_sync 00:03:42.198 EAL: No shared files mode enabled, IPC is disabled 00:03:42.198 EAL: Heap on socket 0 was shrunk by 4MB 00:03:42.198 EAL: Trying to obtain current memory policy. 
00:03:42.198 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.198 EAL: Restoring previous memory policy: 4 00:03:42.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.198 EAL: request: mp_malloc_sync 00:03:42.198 EAL: No shared files mode enabled, IPC is disabled 00:03:42.198 EAL: Heap on socket 0 was expanded by 6MB 00:03:42.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.198 EAL: request: mp_malloc_sync 00:03:42.198 EAL: No shared files mode enabled, IPC is disabled 00:03:42.198 EAL: Heap on socket 0 was shrunk by 6MB 00:03:42.198 EAL: Trying to obtain current memory policy. 00:03:42.198 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.198 EAL: Restoring previous memory policy: 4 00:03:42.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.198 EAL: request: mp_malloc_sync 00:03:42.198 EAL: No shared files mode enabled, IPC is disabled 00:03:42.198 EAL: Heap on socket 0 was expanded by 10MB 00:03:42.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.198 EAL: request: mp_malloc_sync 00:03:42.198 EAL: No shared files mode enabled, IPC is disabled 00:03:42.198 EAL: Heap on socket 0 was shrunk by 10MB 00:03:42.198 EAL: Trying to obtain current memory policy. 00:03:42.198 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.198 EAL: Restoring previous memory policy: 4 00:03:42.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.198 EAL: request: mp_malloc_sync 00:03:42.198 EAL: No shared files mode enabled, IPC is disabled 00:03:42.198 EAL: Heap on socket 0 was expanded by 18MB 00:03:42.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.198 EAL: request: mp_malloc_sync 00:03:42.198 EAL: No shared files mode enabled, IPC is disabled 00:03:42.198 EAL: Heap on socket 0 was shrunk by 18MB 00:03:42.198 EAL: Trying to obtain current memory policy. 
00:03:42.198 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.198 EAL: Restoring previous memory policy: 4 00:03:42.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.198 EAL: request: mp_malloc_sync 00:03:42.198 EAL: No shared files mode enabled, IPC is disabled 00:03:42.198 EAL: Heap on socket 0 was expanded by 34MB 00:03:42.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.198 EAL: request: mp_malloc_sync 00:03:42.198 EAL: No shared files mode enabled, IPC is disabled 00:03:42.198 EAL: Heap on socket 0 was shrunk by 34MB 00:03:42.198 EAL: Trying to obtain current memory policy. 00:03:42.198 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.198 EAL: Restoring previous memory policy: 4 00:03:42.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.198 EAL: request: mp_malloc_sync 00:03:42.198 EAL: No shared files mode enabled, IPC is disabled 00:03:42.198 EAL: Heap on socket 0 was expanded by 66MB 00:03:42.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.198 EAL: request: mp_malloc_sync 00:03:42.198 EAL: No shared files mode enabled, IPC is disabled 00:03:42.198 EAL: Heap on socket 0 was shrunk by 66MB 00:03:42.198 EAL: Trying to obtain current memory policy. 00:03:42.198 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.198 EAL: Restoring previous memory policy: 4 00:03:42.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.198 EAL: request: mp_malloc_sync 00:03:42.198 EAL: No shared files mode enabled, IPC is disabled 00:03:42.198 EAL: Heap on socket 0 was expanded by 130MB 00:03:42.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.198 EAL: request: mp_malloc_sync 00:03:42.198 EAL: No shared files mode enabled, IPC is disabled 00:03:42.198 EAL: Heap on socket 0 was shrunk by 130MB 00:03:42.198 EAL: Trying to obtain current memory policy. 
00:03:42.198 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.198 EAL: Restoring previous memory policy: 4 00:03:42.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.198 EAL: request: mp_malloc_sync 00:03:42.198 EAL: No shared files mode enabled, IPC is disabled 00:03:42.198 EAL: Heap on socket 0 was expanded by 258MB 00:03:42.198 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.198 EAL: request: mp_malloc_sync 00:03:42.198 EAL: No shared files mode enabled, IPC is disabled 00:03:42.198 EAL: Heap on socket 0 was shrunk by 258MB 00:03:42.198 EAL: Trying to obtain current memory policy. 00:03:42.198 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.459 EAL: Restoring previous memory policy: 4 00:03:42.459 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.459 EAL: request: mp_malloc_sync 00:03:42.459 EAL: No shared files mode enabled, IPC is disabled 00:03:42.459 EAL: Heap on socket 0 was expanded by 514MB 00:03:42.459 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.459 EAL: request: mp_malloc_sync 00:03:42.459 EAL: No shared files mode enabled, IPC is disabled 00:03:42.459 EAL: Heap on socket 0 was shrunk by 514MB 00:03:42.459 EAL: Trying to obtain current memory policy. 
00:03:42.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.459 EAL: Restoring previous memory policy: 4 00:03:42.459 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.459 EAL: request: mp_malloc_sync 00:03:42.459 EAL: No shared files mode enabled, IPC is disabled 00:03:42.459 EAL: Heap on socket 0 was expanded by 1026MB 00:03:42.720 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.720 EAL: request: mp_malloc_sync 00:03:42.720 EAL: No shared files mode enabled, IPC is disabled 00:03:42.720 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:42.720 passed 00:03:42.720 00:03:42.720 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.720 suites 1 1 n/a 0 0 00:03:42.720 tests 2 2 2 0 0 00:03:42.720 asserts 497 497 497 0 n/a 00:03:42.720 00:03:42.720 Elapsed time = 0.656 seconds 00:03:42.720 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.720 EAL: request: mp_malloc_sync 00:03:42.720 EAL: No shared files mode enabled, IPC is disabled 00:03:42.720 EAL: Heap on socket 0 was shrunk by 2MB 00:03:42.720 EAL: No shared files mode enabled, IPC is disabled 00:03:42.720 EAL: No shared files mode enabled, IPC is disabled 00:03:42.720 EAL: No shared files mode enabled, IPC is disabled 00:03:42.720 00:03:42.720 real 0m0.804s 00:03:42.720 user 0m0.414s 00:03:42.720 sys 0m0.349s 00:03:42.720 10:45:34 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:42.720 10:45:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:42.720 ************************************ 00:03:42.720 END TEST env_vtophys 00:03:42.720 ************************************ 00:03:42.720 10:45:34 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:42.720 10:45:34 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:42.720 10:45:34 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:42.720 10:45:34 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.980 
************************************ 00:03:42.980 START TEST env_pci 00:03:42.980 ************************************ 00:03:42.980 10:45:34 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:42.980 00:03:42.980 00:03:42.980 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.980 http://cunit.sourceforge.net/ 00:03:42.980 00:03:42.980 00:03:42.980 Suite: pci 00:03:42.980 Test: pci_hook ...[2024-11-06 10:45:34.165346] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3012551 has claimed it 00:03:42.980 EAL: Cannot find device (10000:00:01.0) 00:03:42.980 EAL: Failed to attach device on primary process 00:03:42.980 passed 00:03:42.980 00:03:42.980 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.980 suites 1 1 n/a 0 0 00:03:42.980 tests 1 1 1 0 0 00:03:42.980 asserts 25 25 25 0 n/a 00:03:42.980 00:03:42.980 Elapsed time = 0.031 seconds 00:03:42.980 00:03:42.980 real 0m0.052s 00:03:42.980 user 0m0.020s 00:03:42.980 sys 0m0.032s 00:03:42.980 10:45:34 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:42.980 10:45:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:42.980 ************************************ 00:03:42.980 END TEST env_pci 00:03:42.980 ************************************ 00:03:42.980 10:45:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:42.980 10:45:34 env -- env/env.sh@15 -- # uname 00:03:42.980 10:45:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:42.980 10:45:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:42.980 10:45:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:42.980 10:45:34 env -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:03:42.980 10:45:34 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:42.980 10:45:34 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.980 ************************************ 00:03:42.980 START TEST env_dpdk_post_init 00:03:42.980 ************************************ 00:03:42.980 10:45:34 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:42.980 EAL: Detected CPU lcores: 128 00:03:42.980 EAL: Detected NUMA nodes: 2 00:03:42.980 EAL: Detected shared linkage of DPDK 00:03:42.980 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:42.980 EAL: Selected IOVA mode 'VA' 00:03:42.980 EAL: VFIO support initialized 00:03:42.980 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:43.240 EAL: Using IOMMU type 1 (Type 1) 00:03:43.240 EAL: Ignore mapping IO port bar(1) 00:03:43.240 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:43.500 EAL: Ignore mapping IO port bar(1) 00:03:43.500 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:43.760 EAL: Ignore mapping IO port bar(1) 00:03:43.760 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:44.021 EAL: Ignore mapping IO port bar(1) 00:03:44.021 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:44.021 EAL: Ignore mapping IO port bar(1) 00:03:44.282 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:44.282 EAL: Ignore mapping IO port bar(1) 00:03:44.542 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:44.542 EAL: Ignore mapping IO port bar(1) 00:03:44.802 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:44.802 EAL: Ignore mapping IO port bar(1) 00:03:44.802 EAL: 
Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:45.063 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:45.323 EAL: Ignore mapping IO port bar(1) 00:03:45.323 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:45.583 EAL: Ignore mapping IO port bar(1) 00:03:45.583 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:45.583 EAL: Ignore mapping IO port bar(1) 00:03:45.844 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:45.844 EAL: Ignore mapping IO port bar(1) 00:03:46.104 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:46.104 EAL: Ignore mapping IO port bar(1) 00:03:46.365 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:46.365 EAL: Ignore mapping IO port bar(1) 00:03:46.365 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:46.625 EAL: Ignore mapping IO port bar(1) 00:03:46.625 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:46.904 EAL: Ignore mapping IO port bar(1) 00:03:46.904 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:46.904 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:46.904 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:47.209 Starting DPDK initialization... 00:03:47.209 Starting SPDK post initialization... 00:03:47.209 SPDK NVMe probe 00:03:47.209 Attaching to 0000:65:00.0 00:03:47.209 Attached to 0000:65:00.0 00:03:47.209 Cleaning up... 
00:03:48.615 00:03:48.615 real 0m5.728s 00:03:48.615 user 0m0.111s 00:03:48.615 sys 0m0.162s 00:03:48.615 10:45:40 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:48.615 10:45:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:48.615 ************************************ 00:03:48.615 END TEST env_dpdk_post_init 00:03:48.615 ************************************ 00:03:48.876 10:45:40 env -- env/env.sh@26 -- # uname 00:03:48.876 10:45:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:48.876 10:45:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:48.876 10:45:40 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:48.876 10:45:40 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:48.876 10:45:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.876 ************************************ 00:03:48.876 START TEST env_mem_callbacks 00:03:48.876 ************************************ 00:03:48.876 10:45:40 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:48.876 EAL: Detected CPU lcores: 128 00:03:48.876 EAL: Detected NUMA nodes: 2 00:03:48.876 EAL: Detected shared linkage of DPDK 00:03:48.876 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:48.876 EAL: Selected IOVA mode 'VA' 00:03:48.876 EAL: VFIO support initialized 00:03:48.876 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:48.876 00:03:48.876 00:03:48.876 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.876 http://cunit.sourceforge.net/ 00:03:48.876 00:03:48.876 00:03:48.876 Suite: memory 00:03:48.876 Test: test ... 
00:03:48.876 register 0x200000200000 2097152 00:03:48.876 malloc 3145728 00:03:48.876 register 0x200000400000 4194304 00:03:48.876 buf 0x200000500000 len 3145728 PASSED 00:03:48.876 malloc 64 00:03:48.877 buf 0x2000004fff40 len 64 PASSED 00:03:48.877 malloc 4194304 00:03:48.877 register 0x200000800000 6291456 00:03:48.877 buf 0x200000a00000 len 4194304 PASSED 00:03:48.877 free 0x200000500000 3145728 00:03:48.877 free 0x2000004fff40 64 00:03:48.877 unregister 0x200000400000 4194304 PASSED 00:03:48.877 free 0x200000a00000 4194304 00:03:48.877 unregister 0x200000800000 6291456 PASSED 00:03:48.877 malloc 8388608 00:03:48.877 register 0x200000400000 10485760 00:03:48.877 buf 0x200000600000 len 8388608 PASSED 00:03:48.877 free 0x200000600000 8388608 00:03:48.877 unregister 0x200000400000 10485760 PASSED 00:03:48.877 passed 00:03:48.877 00:03:48.877 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.877 suites 1 1 n/a 0 0 00:03:48.877 tests 1 1 1 0 0 00:03:48.877 asserts 15 15 15 0 n/a 00:03:48.877 00:03:48.877 Elapsed time = 0.006 seconds 00:03:48.877 00:03:48.877 real 0m0.057s 00:03:48.877 user 0m0.023s 00:03:48.877 sys 0m0.033s 00:03:48.877 10:45:40 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:48.877 10:45:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:48.877 ************************************ 00:03:48.877 END TEST env_mem_callbacks 00:03:48.877 ************************************ 00:03:48.877 00:03:48.877 real 0m7.452s 00:03:48.877 user 0m1.005s 00:03:48.877 sys 0m0.985s 00:03:48.877 10:45:40 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:48.877 10:45:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.877 ************************************ 00:03:48.877 END TEST env 00:03:48.877 ************************************ 00:03:48.877 10:45:40 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:48.877 10:45:40 
-- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:48.877 10:45:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:48.877 10:45:40 -- common/autotest_common.sh@10 -- # set +x 00:03:48.877 ************************************ 00:03:48.877 START TEST rpc 00:03:48.877 ************************************ 00:03:48.877 10:45:40 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:49.138 * Looking for test storage... 00:03:49.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.138 10:45:40 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:49.138 10:45:40 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:49.138 10:45:40 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:49.138 10:45:40 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:49.138 10:45:40 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.138 10:45:40 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.138 10:45:40 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.138 10:45:40 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.138 10:45:40 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.138 10:45:40 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.138 10:45:40 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.138 10:45:40 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.138 10:45:40 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.138 10:45:40 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.138 10:45:40 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.138 10:45:40 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:49.138 10:45:40 rpc -- scripts/common.sh@345 -- # : 1 00:03:49.138 10:45:40 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.138 10:45:40 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.138 10:45:40 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:49.138 10:45:40 rpc -- scripts/common.sh@353 -- # local d=1 00:03:49.138 10:45:40 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.138 10:45:40 rpc -- scripts/common.sh@355 -- # echo 1 00:03:49.138 10:45:40 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.138 10:45:40 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:49.138 10:45:40 rpc -- scripts/common.sh@353 -- # local d=2 00:03:49.138 10:45:40 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.138 10:45:40 rpc -- scripts/common.sh@355 -- # echo 2 00:03:49.138 10:45:40 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.138 10:45:40 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.138 10:45:40 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.138 10:45:40 rpc -- scripts/common.sh@368 -- # return 0 00:03:49.138 10:45:40 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.138 10:45:40 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:49.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.138 --rc genhtml_branch_coverage=1 00:03:49.138 --rc genhtml_function_coverage=1 00:03:49.138 --rc genhtml_legend=1 00:03:49.138 --rc geninfo_all_blocks=1 00:03:49.138 --rc geninfo_unexecuted_blocks=1 00:03:49.138 00:03:49.138 ' 00:03:49.138 10:45:40 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:49.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.138 --rc genhtml_branch_coverage=1 00:03:49.138 --rc genhtml_function_coverage=1 00:03:49.138 --rc genhtml_legend=1 00:03:49.138 --rc geninfo_all_blocks=1 00:03:49.138 --rc geninfo_unexecuted_blocks=1 00:03:49.138 00:03:49.138 ' 00:03:49.138 10:45:40 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:49.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:49.138 --rc genhtml_branch_coverage=1 00:03:49.138 --rc genhtml_function_coverage=1 00:03:49.138 --rc genhtml_legend=1 00:03:49.138 --rc geninfo_all_blocks=1 00:03:49.138 --rc geninfo_unexecuted_blocks=1 00:03:49.138 00:03:49.138 ' 00:03:49.138 10:45:40 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:49.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.138 --rc genhtml_branch_coverage=1 00:03:49.138 --rc genhtml_function_coverage=1 00:03:49.138 --rc genhtml_legend=1 00:03:49.138 --rc geninfo_all_blocks=1 00:03:49.138 --rc geninfo_unexecuted_blocks=1 00:03:49.138 00:03:49.138 ' 00:03:49.138 10:45:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3013867 00:03:49.138 10:45:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:49.138 10:45:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3013867 00:03:49.138 10:45:40 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:49.138 10:45:40 rpc -- common/autotest_common.sh@833 -- # '[' -z 3013867 ']' 00:03:49.138 10:45:40 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:49.138 10:45:40 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:49.138 10:45:40 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:49.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:49.138 10:45:40 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:49.138 10:45:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.138 [2024-11-06 10:45:40.541636] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:03:49.138 [2024-11-06 10:45:40.541721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3013867 ] 00:03:49.399 [2024-11-06 10:45:40.617904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.399 [2024-11-06 10:45:40.659944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:49.399 [2024-11-06 10:45:40.659981] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3013867' to capture a snapshot of events at runtime. 00:03:49.399 [2024-11-06 10:45:40.659990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:49.399 [2024-11-06 10:45:40.659996] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:49.399 [2024-11-06 10:45:40.660003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3013867 for offline analysis/debug. 
00:03:49.399 [2024-11-06 10:45:40.660592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.970 10:45:41 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:49.970 10:45:41 rpc -- common/autotest_common.sh@866 -- # return 0 00:03:49.970 10:45:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.970 10:45:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.970 10:45:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:49.970 10:45:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:49.970 10:45:41 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:49.970 10:45:41 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:49.970 10:45:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.970 ************************************ 00:03:49.970 START TEST rpc_integrity 00:03:49.970 ************************************ 00:03:49.970 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:49.970 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:49.970 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.970 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.970 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.970 10:45:41 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:49.970 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:50.230 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:50.230 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:50.230 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.230 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.230 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.230 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:50.230 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:50.230 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.230 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.230 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.230 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:50.230 { 00:03:50.230 "name": "Malloc0", 00:03:50.230 "aliases": [ 00:03:50.230 "ac78460b-630c-4596-81ce-bebe2911dcb8" 00:03:50.230 ], 00:03:50.230 "product_name": "Malloc disk", 00:03:50.230 "block_size": 512, 00:03:50.230 "num_blocks": 16384, 00:03:50.230 "uuid": "ac78460b-630c-4596-81ce-bebe2911dcb8", 00:03:50.230 "assigned_rate_limits": { 00:03:50.230 "rw_ios_per_sec": 0, 00:03:50.230 "rw_mbytes_per_sec": 0, 00:03:50.230 "r_mbytes_per_sec": 0, 00:03:50.230 "w_mbytes_per_sec": 0 00:03:50.230 }, 00:03:50.230 "claimed": false, 00:03:50.230 "zoned": false, 00:03:50.230 "supported_io_types": { 00:03:50.230 "read": true, 00:03:50.230 "write": true, 00:03:50.230 "unmap": true, 00:03:50.230 "flush": true, 00:03:50.230 "reset": true, 00:03:50.230 "nvme_admin": false, 00:03:50.230 "nvme_io": false, 00:03:50.230 "nvme_io_md": false, 00:03:50.230 "write_zeroes": true, 00:03:50.230 "zcopy": true, 00:03:50.230 "get_zone_info": false, 00:03:50.230 
"zone_management": false, 00:03:50.230 "zone_append": false, 00:03:50.230 "compare": false, 00:03:50.230 "compare_and_write": false, 00:03:50.230 "abort": true, 00:03:50.230 "seek_hole": false, 00:03:50.230 "seek_data": false, 00:03:50.230 "copy": true, 00:03:50.230 "nvme_iov_md": false 00:03:50.230 }, 00:03:50.230 "memory_domains": [ 00:03:50.230 { 00:03:50.230 "dma_device_id": "system", 00:03:50.230 "dma_device_type": 1 00:03:50.230 }, 00:03:50.230 { 00:03:50.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.230 "dma_device_type": 2 00:03:50.230 } 00:03:50.230 ], 00:03:50.230 "driver_specific": {} 00:03:50.230 } 00:03:50.230 ]' 00:03:50.230 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:50.230 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:50.230 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:50.230 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.230 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.230 [2024-11-06 10:45:41.481482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:50.230 [2024-11-06 10:45:41.481514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:50.230 [2024-11-06 10:45:41.481527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x233cda0 00:03:50.230 [2024-11-06 10:45:41.481534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:50.230 [2024-11-06 10:45:41.482889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:50.230 [2024-11-06 10:45:41.482911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:50.230 Passthru0 00:03:50.230 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.230 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:50.230 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.230 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.230 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.230 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:50.230 { 00:03:50.230 "name": "Malloc0", 00:03:50.230 "aliases": [ 00:03:50.230 "ac78460b-630c-4596-81ce-bebe2911dcb8" 00:03:50.230 ], 00:03:50.230 "product_name": "Malloc disk", 00:03:50.230 "block_size": 512, 00:03:50.230 "num_blocks": 16384, 00:03:50.230 "uuid": "ac78460b-630c-4596-81ce-bebe2911dcb8", 00:03:50.230 "assigned_rate_limits": { 00:03:50.230 "rw_ios_per_sec": 0, 00:03:50.230 "rw_mbytes_per_sec": 0, 00:03:50.230 "r_mbytes_per_sec": 0, 00:03:50.230 "w_mbytes_per_sec": 0 00:03:50.230 }, 00:03:50.230 "claimed": true, 00:03:50.230 "claim_type": "exclusive_write", 00:03:50.230 "zoned": false, 00:03:50.230 "supported_io_types": { 00:03:50.230 "read": true, 00:03:50.230 "write": true, 00:03:50.231 "unmap": true, 00:03:50.231 "flush": true, 00:03:50.231 "reset": true, 00:03:50.231 "nvme_admin": false, 00:03:50.231 "nvme_io": false, 00:03:50.231 "nvme_io_md": false, 00:03:50.231 "write_zeroes": true, 00:03:50.231 "zcopy": true, 00:03:50.231 "get_zone_info": false, 00:03:50.231 "zone_management": false, 00:03:50.231 "zone_append": false, 00:03:50.231 "compare": false, 00:03:50.231 "compare_and_write": false, 00:03:50.231 "abort": true, 00:03:50.231 "seek_hole": false, 00:03:50.231 "seek_data": false, 00:03:50.231 "copy": true, 00:03:50.231 "nvme_iov_md": false 00:03:50.231 }, 00:03:50.231 "memory_domains": [ 00:03:50.231 { 00:03:50.231 "dma_device_id": "system", 00:03:50.231 "dma_device_type": 1 00:03:50.231 }, 00:03:50.231 { 00:03:50.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.231 "dma_device_type": 2 00:03:50.231 } 00:03:50.231 ], 00:03:50.231 "driver_specific": {} 00:03:50.231 }, 00:03:50.231 { 
00:03:50.231 "name": "Passthru0", 00:03:50.231 "aliases": [ 00:03:50.231 "15ac03ed-5356-5642-affe-d6b0e8fc60d9" 00:03:50.231 ], 00:03:50.231 "product_name": "passthru", 00:03:50.231 "block_size": 512, 00:03:50.231 "num_blocks": 16384, 00:03:50.231 "uuid": "15ac03ed-5356-5642-affe-d6b0e8fc60d9", 00:03:50.231 "assigned_rate_limits": { 00:03:50.231 "rw_ios_per_sec": 0, 00:03:50.231 "rw_mbytes_per_sec": 0, 00:03:50.231 "r_mbytes_per_sec": 0, 00:03:50.231 "w_mbytes_per_sec": 0 00:03:50.231 }, 00:03:50.231 "claimed": false, 00:03:50.231 "zoned": false, 00:03:50.231 "supported_io_types": { 00:03:50.231 "read": true, 00:03:50.231 "write": true, 00:03:50.231 "unmap": true, 00:03:50.231 "flush": true, 00:03:50.231 "reset": true, 00:03:50.231 "nvme_admin": false, 00:03:50.231 "nvme_io": false, 00:03:50.231 "nvme_io_md": false, 00:03:50.231 "write_zeroes": true, 00:03:50.231 "zcopy": true, 00:03:50.231 "get_zone_info": false, 00:03:50.231 "zone_management": false, 00:03:50.231 "zone_append": false, 00:03:50.231 "compare": false, 00:03:50.231 "compare_and_write": false, 00:03:50.231 "abort": true, 00:03:50.231 "seek_hole": false, 00:03:50.231 "seek_data": false, 00:03:50.231 "copy": true, 00:03:50.231 "nvme_iov_md": false 00:03:50.231 }, 00:03:50.231 "memory_domains": [ 00:03:50.231 { 00:03:50.231 "dma_device_id": "system", 00:03:50.231 "dma_device_type": 1 00:03:50.231 }, 00:03:50.231 { 00:03:50.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.231 "dma_device_type": 2 00:03:50.231 } 00:03:50.231 ], 00:03:50.231 "driver_specific": { 00:03:50.231 "passthru": { 00:03:50.231 "name": "Passthru0", 00:03:50.231 "base_bdev_name": "Malloc0" 00:03:50.231 } 00:03:50.231 } 00:03:50.231 } 00:03:50.231 ]' 00:03:50.231 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:50.231 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:50.231 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:50.231 10:45:41 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.231 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.231 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.231 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:50.231 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.231 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.231 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.231 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:50.231 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.231 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.231 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.231 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:50.231 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:50.231 10:45:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:50.231 00:03:50.231 real 0m0.285s 00:03:50.231 user 0m0.180s 00:03:50.231 sys 0m0.045s 00:03:50.231 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:50.231 10:45:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.231 ************************************ 00:03:50.231 END TEST rpc_integrity 00:03:50.231 ************************************ 00:03:50.491 10:45:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:50.491 10:45:41 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:50.492 10:45:41 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:50.492 10:45:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.492 ************************************ 00:03:50.492 START TEST rpc_plugins 
00:03:50.492 ************************************ 00:03:50.492 10:45:41 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:03:50.492 10:45:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:50.492 10:45:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.492 10:45:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:50.492 10:45:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.492 10:45:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:50.492 10:45:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:50.492 10:45:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.492 10:45:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:50.492 10:45:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.492 10:45:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:50.492 { 00:03:50.492 "name": "Malloc1", 00:03:50.492 "aliases": [ 00:03:50.492 "4a6e8f0e-ceb6-4ec6-8036-c436474c9307" 00:03:50.492 ], 00:03:50.492 "product_name": "Malloc disk", 00:03:50.492 "block_size": 4096, 00:03:50.492 "num_blocks": 256, 00:03:50.492 "uuid": "4a6e8f0e-ceb6-4ec6-8036-c436474c9307", 00:03:50.492 "assigned_rate_limits": { 00:03:50.492 "rw_ios_per_sec": 0, 00:03:50.492 "rw_mbytes_per_sec": 0, 00:03:50.492 "r_mbytes_per_sec": 0, 00:03:50.492 "w_mbytes_per_sec": 0 00:03:50.492 }, 00:03:50.492 "claimed": false, 00:03:50.492 "zoned": false, 00:03:50.492 "supported_io_types": { 00:03:50.492 "read": true, 00:03:50.492 "write": true, 00:03:50.492 "unmap": true, 00:03:50.492 "flush": true, 00:03:50.492 "reset": true, 00:03:50.492 "nvme_admin": false, 00:03:50.492 "nvme_io": false, 00:03:50.492 "nvme_io_md": false, 00:03:50.492 "write_zeroes": true, 00:03:50.492 "zcopy": true, 00:03:50.492 "get_zone_info": false, 00:03:50.492 "zone_management": false, 00:03:50.492 
"zone_append": false, 00:03:50.492 "compare": false, 00:03:50.492 "compare_and_write": false, 00:03:50.492 "abort": true, 00:03:50.492 "seek_hole": false, 00:03:50.492 "seek_data": false, 00:03:50.492 "copy": true, 00:03:50.492 "nvme_iov_md": false 00:03:50.492 }, 00:03:50.492 "memory_domains": [ 00:03:50.492 { 00:03:50.492 "dma_device_id": "system", 00:03:50.492 "dma_device_type": 1 00:03:50.492 }, 00:03:50.492 { 00:03:50.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.492 "dma_device_type": 2 00:03:50.492 } 00:03:50.492 ], 00:03:50.492 "driver_specific": {} 00:03:50.492 } 00:03:50.492 ]' 00:03:50.492 10:45:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:50.492 10:45:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:50.492 10:45:41 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:50.492 10:45:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.492 10:45:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:50.492 10:45:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.492 10:45:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:50.492 10:45:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.492 10:45:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:50.492 10:45:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.492 10:45:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:50.492 10:45:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:50.492 10:45:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:50.492 00:03:50.492 real 0m0.153s 00:03:50.492 user 0m0.092s 00:03:50.492 sys 0m0.025s 00:03:50.492 10:45:41 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:50.492 10:45:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:50.492 ************************************ 
00:03:50.492 END TEST rpc_plugins 00:03:50.492 ************************************ 00:03:50.492 10:45:41 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:50.492 10:45:41 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:50.492 10:45:41 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:50.492 10:45:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.752 ************************************ 00:03:50.752 START TEST rpc_trace_cmd_test 00:03:50.752 ************************************ 00:03:50.752 10:45:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:03:50.752 10:45:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:50.752 10:45:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:50.752 10:45:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.752 10:45:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:50.752 10:45:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.752 10:45:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:50.752 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3013867", 00:03:50.752 "tpoint_group_mask": "0x8", 00:03:50.752 "iscsi_conn": { 00:03:50.752 "mask": "0x2", 00:03:50.752 "tpoint_mask": "0x0" 00:03:50.752 }, 00:03:50.752 "scsi": { 00:03:50.752 "mask": "0x4", 00:03:50.752 "tpoint_mask": "0x0" 00:03:50.752 }, 00:03:50.752 "bdev": { 00:03:50.752 "mask": "0x8", 00:03:50.752 "tpoint_mask": "0xffffffffffffffff" 00:03:50.752 }, 00:03:50.752 "nvmf_rdma": { 00:03:50.752 "mask": "0x10", 00:03:50.752 "tpoint_mask": "0x0" 00:03:50.752 }, 00:03:50.752 "nvmf_tcp": { 00:03:50.752 "mask": "0x20", 00:03:50.752 "tpoint_mask": "0x0" 00:03:50.752 }, 00:03:50.752 "ftl": { 00:03:50.752 "mask": "0x40", 00:03:50.752 "tpoint_mask": "0x0" 00:03:50.752 }, 00:03:50.752 "blobfs": { 00:03:50.752 "mask": "0x80", 00:03:50.752 
"tpoint_mask": "0x0" 00:03:50.752 }, 00:03:50.752 "dsa": { 00:03:50.752 "mask": "0x200", 00:03:50.753 "tpoint_mask": "0x0" 00:03:50.753 }, 00:03:50.753 "thread": { 00:03:50.753 "mask": "0x400", 00:03:50.753 "tpoint_mask": "0x0" 00:03:50.753 }, 00:03:50.753 "nvme_pcie": { 00:03:50.753 "mask": "0x800", 00:03:50.753 "tpoint_mask": "0x0" 00:03:50.753 }, 00:03:50.753 "iaa": { 00:03:50.753 "mask": "0x1000", 00:03:50.753 "tpoint_mask": "0x0" 00:03:50.753 }, 00:03:50.753 "nvme_tcp": { 00:03:50.753 "mask": "0x2000", 00:03:50.753 "tpoint_mask": "0x0" 00:03:50.753 }, 00:03:50.753 "bdev_nvme": { 00:03:50.753 "mask": "0x4000", 00:03:50.753 "tpoint_mask": "0x0" 00:03:50.753 }, 00:03:50.753 "sock": { 00:03:50.753 "mask": "0x8000", 00:03:50.753 "tpoint_mask": "0x0" 00:03:50.753 }, 00:03:50.753 "blob": { 00:03:50.753 "mask": "0x10000", 00:03:50.753 "tpoint_mask": "0x0" 00:03:50.753 }, 00:03:50.753 "bdev_raid": { 00:03:50.753 "mask": "0x20000", 00:03:50.753 "tpoint_mask": "0x0" 00:03:50.753 }, 00:03:50.753 "scheduler": { 00:03:50.753 "mask": "0x40000", 00:03:50.753 "tpoint_mask": "0x0" 00:03:50.753 } 00:03:50.753 }' 00:03:50.753 10:45:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:50.753 10:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:50.753 10:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:50.753 10:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:50.753 10:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:50.753 10:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:50.753 10:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:50.753 10:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:50.753 10:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:51.013 10:45:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:51.013 00:03:51.013 real 0m0.252s 00:03:51.013 user 0m0.210s 00:03:51.013 sys 0m0.032s 00:03:51.013 10:45:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:51.013 10:45:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:51.013 ************************************ 00:03:51.013 END TEST rpc_trace_cmd_test 00:03:51.013 ************************************ 00:03:51.014 10:45:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:51.014 10:45:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:51.014 10:45:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:51.014 10:45:42 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:51.014 10:45:42 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:51.014 10:45:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.014 ************************************ 00:03:51.014 START TEST rpc_daemon_integrity 00:03:51.014 ************************************ 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:51.014 { 00:03:51.014 "name": "Malloc2", 00:03:51.014 "aliases": [ 00:03:51.014 "c5289662-2f66-4dec-815e-0851af8fd230" 00:03:51.014 ], 00:03:51.014 "product_name": "Malloc disk", 00:03:51.014 "block_size": 512, 00:03:51.014 "num_blocks": 16384, 00:03:51.014 "uuid": "c5289662-2f66-4dec-815e-0851af8fd230", 00:03:51.014 "assigned_rate_limits": { 00:03:51.014 "rw_ios_per_sec": 0, 00:03:51.014 "rw_mbytes_per_sec": 0, 00:03:51.014 "r_mbytes_per_sec": 0, 00:03:51.014 "w_mbytes_per_sec": 0 00:03:51.014 }, 00:03:51.014 "claimed": false, 00:03:51.014 "zoned": false, 00:03:51.014 "supported_io_types": { 00:03:51.014 "read": true, 00:03:51.014 "write": true, 00:03:51.014 "unmap": true, 00:03:51.014 "flush": true, 00:03:51.014 "reset": true, 00:03:51.014 "nvme_admin": false, 00:03:51.014 "nvme_io": false, 00:03:51.014 "nvme_io_md": false, 00:03:51.014 "write_zeroes": true, 00:03:51.014 "zcopy": true, 00:03:51.014 "get_zone_info": false, 00:03:51.014 "zone_management": false, 00:03:51.014 "zone_append": false, 00:03:51.014 "compare": false, 00:03:51.014 "compare_and_write": false, 00:03:51.014 "abort": true, 00:03:51.014 "seek_hole": false, 00:03:51.014 "seek_data": false, 00:03:51.014 "copy": true, 00:03:51.014 "nvme_iov_md": false 00:03:51.014 }, 00:03:51.014 "memory_domains": [ 00:03:51.014 { 
00:03:51.014 "dma_device_id": "system", 00:03:51.014 "dma_device_type": 1 00:03:51.014 }, 00:03:51.014 { 00:03:51.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.014 "dma_device_type": 2 00:03:51.014 } 00:03:51.014 ], 00:03:51.014 "driver_specific": {} 00:03:51.014 } 00:03:51.014 ]' 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.014 [2024-11-06 10:45:42.420046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:51.014 [2024-11-06 10:45:42.420076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:51.014 [2024-11-06 10:45:42.420090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x246e090 00:03:51.014 [2024-11-06 10:45:42.420098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:51.014 [2024-11-06 10:45:42.421402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:51.014 [2024-11-06 10:45:42.421422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:51.014 Passthru0 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.014 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.274 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:03:51.274 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:51.274 { 00:03:51.274 "name": "Malloc2", 00:03:51.274 "aliases": [ 00:03:51.274 "c5289662-2f66-4dec-815e-0851af8fd230" 00:03:51.274 ], 00:03:51.274 "product_name": "Malloc disk", 00:03:51.274 "block_size": 512, 00:03:51.274 "num_blocks": 16384, 00:03:51.274 "uuid": "c5289662-2f66-4dec-815e-0851af8fd230", 00:03:51.274 "assigned_rate_limits": { 00:03:51.274 "rw_ios_per_sec": 0, 00:03:51.274 "rw_mbytes_per_sec": 0, 00:03:51.274 "r_mbytes_per_sec": 0, 00:03:51.274 "w_mbytes_per_sec": 0 00:03:51.274 }, 00:03:51.274 "claimed": true, 00:03:51.274 "claim_type": "exclusive_write", 00:03:51.274 "zoned": false, 00:03:51.274 "supported_io_types": { 00:03:51.274 "read": true, 00:03:51.274 "write": true, 00:03:51.274 "unmap": true, 00:03:51.274 "flush": true, 00:03:51.274 "reset": true, 00:03:51.274 "nvme_admin": false, 00:03:51.274 "nvme_io": false, 00:03:51.274 "nvme_io_md": false, 00:03:51.274 "write_zeroes": true, 00:03:51.274 "zcopy": true, 00:03:51.274 "get_zone_info": false, 00:03:51.274 "zone_management": false, 00:03:51.274 "zone_append": false, 00:03:51.274 "compare": false, 00:03:51.274 "compare_and_write": false, 00:03:51.274 "abort": true, 00:03:51.274 "seek_hole": false, 00:03:51.274 "seek_data": false, 00:03:51.274 "copy": true, 00:03:51.274 "nvme_iov_md": false 00:03:51.274 }, 00:03:51.274 "memory_domains": [ 00:03:51.274 { 00:03:51.274 "dma_device_id": "system", 00:03:51.274 "dma_device_type": 1 00:03:51.274 }, 00:03:51.274 { 00:03:51.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.274 "dma_device_type": 2 00:03:51.274 } 00:03:51.274 ], 00:03:51.274 "driver_specific": {} 00:03:51.274 }, 00:03:51.274 { 00:03:51.274 "name": "Passthru0", 00:03:51.275 "aliases": [ 00:03:51.275 "659de41c-9391-5157-82b9-32cfa0f2cd5d" 00:03:51.275 ], 00:03:51.275 "product_name": "passthru", 00:03:51.275 "block_size": 512, 00:03:51.275 "num_blocks": 16384, 00:03:51.275 "uuid": 
"659de41c-9391-5157-82b9-32cfa0f2cd5d", 00:03:51.275 "assigned_rate_limits": { 00:03:51.275 "rw_ios_per_sec": 0, 00:03:51.275 "rw_mbytes_per_sec": 0, 00:03:51.275 "r_mbytes_per_sec": 0, 00:03:51.275 "w_mbytes_per_sec": 0 00:03:51.275 }, 00:03:51.275 "claimed": false, 00:03:51.275 "zoned": false, 00:03:51.275 "supported_io_types": { 00:03:51.275 "read": true, 00:03:51.275 "write": true, 00:03:51.275 "unmap": true, 00:03:51.275 "flush": true, 00:03:51.275 "reset": true, 00:03:51.275 "nvme_admin": false, 00:03:51.275 "nvme_io": false, 00:03:51.275 "nvme_io_md": false, 00:03:51.275 "write_zeroes": true, 00:03:51.275 "zcopy": true, 00:03:51.275 "get_zone_info": false, 00:03:51.275 "zone_management": false, 00:03:51.275 "zone_append": false, 00:03:51.275 "compare": false, 00:03:51.275 "compare_and_write": false, 00:03:51.275 "abort": true, 00:03:51.275 "seek_hole": false, 00:03:51.275 "seek_data": false, 00:03:51.275 "copy": true, 00:03:51.275 "nvme_iov_md": false 00:03:51.275 }, 00:03:51.275 "memory_domains": [ 00:03:51.275 { 00:03:51.275 "dma_device_id": "system", 00:03:51.275 "dma_device_type": 1 00:03:51.275 }, 00:03:51.275 { 00:03:51.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.275 "dma_device_type": 2 00:03:51.275 } 00:03:51.275 ], 00:03:51.275 "driver_specific": { 00:03:51.275 "passthru": { 00:03:51.275 "name": "Passthru0", 00:03:51.275 "base_bdev_name": "Malloc2" 00:03:51.275 } 00:03:51.275 } 00:03:51.275 } 00:03:51.275 ]' 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:51.275 00:03:51.275 real 0m0.306s 00:03:51.275 user 0m0.188s 00:03:51.275 sys 0m0.044s 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:51.275 10:45:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.275 ************************************ 00:03:51.275 END TEST rpc_daemon_integrity 00:03:51.275 ************************************ 00:03:51.275 10:45:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:51.275 10:45:42 rpc -- rpc/rpc.sh@84 -- # killprocess 3013867 00:03:51.275 10:45:42 rpc -- common/autotest_common.sh@952 -- # '[' -z 3013867 ']' 00:03:51.275 10:45:42 rpc -- common/autotest_common.sh@956 -- # kill -0 3013867 00:03:51.275 10:45:42 rpc -- common/autotest_common.sh@957 -- # uname 00:03:51.275 10:45:42 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:51.275 10:45:42 rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3013867 00:03:51.275 10:45:42 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:51.275 10:45:42 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:51.275 10:45:42 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3013867' 00:03:51.275 killing process with pid 3013867 00:03:51.275 10:45:42 rpc -- common/autotest_common.sh@971 -- # kill 3013867 00:03:51.275 10:45:42 rpc -- common/autotest_common.sh@976 -- # wait 3013867 00:03:51.535 00:03:51.535 real 0m2.606s 00:03:51.535 user 0m3.393s 00:03:51.535 sys 0m0.731s 00:03:51.535 10:45:42 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:51.535 10:45:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.535 ************************************ 00:03:51.535 END TEST rpc 00:03:51.535 ************************************ 00:03:51.535 10:45:42 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:51.535 10:45:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:51.535 10:45:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:51.535 10:45:42 -- common/autotest_common.sh@10 -- # set +x 00:03:51.796 ************************************ 00:03:51.796 START TEST skip_rpc 00:03:51.796 ************************************ 00:03:51.796 10:45:42 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:51.796 * Looking for test storage... 
00:03:51.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:51.796 10:45:43 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:51.796 10:45:43 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:51.796 10:45:43 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:51.796 10:45:43 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:51.796 10:45:43 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:51.796 10:45:43 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.796 10:45:43 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:51.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.796 --rc genhtml_branch_coverage=1 00:03:51.796 --rc genhtml_function_coverage=1 00:03:51.796 --rc genhtml_legend=1 00:03:51.796 --rc geninfo_all_blocks=1 00:03:51.796 --rc geninfo_unexecuted_blocks=1 00:03:51.796 00:03:51.796 ' 00:03:51.796 10:45:43 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:51.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.796 --rc genhtml_branch_coverage=1 00:03:51.796 --rc genhtml_function_coverage=1 00:03:51.796 --rc genhtml_legend=1 00:03:51.796 --rc geninfo_all_blocks=1 00:03:51.796 --rc geninfo_unexecuted_blocks=1 00:03:51.796 00:03:51.796 ' 00:03:51.796 10:45:43 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:03:51.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.797 --rc genhtml_branch_coverage=1 00:03:51.797 --rc genhtml_function_coverage=1 00:03:51.797 --rc genhtml_legend=1 00:03:51.797 --rc geninfo_all_blocks=1 00:03:51.797 --rc geninfo_unexecuted_blocks=1 00:03:51.797 00:03:51.797 ' 00:03:51.797 10:45:43 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:51.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.797 --rc genhtml_branch_coverage=1 00:03:51.797 --rc genhtml_function_coverage=1 00:03:51.797 --rc genhtml_legend=1 00:03:51.797 --rc geninfo_all_blocks=1 00:03:51.797 --rc geninfo_unexecuted_blocks=1 00:03:51.797 00:03:51.797 ' 00:03:51.797 10:45:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:51.797 10:45:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:51.797 10:45:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:51.797 10:45:43 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:51.797 10:45:43 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:51.797 10:45:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.797 ************************************ 00:03:51.797 START TEST skip_rpc 00:03:51.797 ************************************ 00:03:51.797 10:45:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:03:51.797 10:45:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3014549 00:03:51.797 10:45:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:51.797 10:45:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:51.797 10:45:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 
00:03:52.057 [2024-11-06 10:45:43.233115] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:03:52.057 [2024-11-06 10:45:43.233177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3014549 ] 00:03:52.057 [2024-11-06 10:45:43.307541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.057 [2024-11-06 10:45:43.350124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:57.342 10:45:48 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3014549 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 3014549 ']' 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 3014549 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3014549 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3014549' 00:03:57.342 killing process with pid 3014549 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 3014549 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 3014549 00:03:57.342 00:03:57.342 real 0m5.276s 00:03:57.342 user 0m5.078s 00:03:57.342 sys 0m0.240s 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:57.342 10:45:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.342 ************************************ 00:03:57.342 END TEST skip_rpc 00:03:57.342 ************************************ 00:03:57.342 10:45:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:57.342 10:45:48 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:57.343 10:45:48 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:57.343 10:45:48 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.343 ************************************ 00:03:57.343 START TEST skip_rpc_with_json 00:03:57.343 ************************************ 00:03:57.343 10:45:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:03:57.343 10:45:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:57.343 10:45:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3015613 00:03:57.343 10:45:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:57.343 10:45:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3015613 00:03:57.343 10:45:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 3015613 ']' 00:03:57.343 10:45:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.343 10:45:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:57.343 10:45:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:57.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.343 10:45:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:57.343 10:45:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.343 10:45:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:57.343 [2024-11-06 10:45:48.587515] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:03:57.343 [2024-11-06 10:45:48.587571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3015613 ] 00:03:57.343 [2024-11-06 10:45:48.660502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.343 [2024-11-06 10:45:48.700828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.284 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:58.284 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:03:58.284 10:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:58.284 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.284 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.284 [2024-11-06 10:45:49.358533] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:58.284 request: 00:03:58.284 { 00:03:58.284 "trtype": "tcp", 00:03:58.284 "method": "nvmf_get_transports", 00:03:58.284 "req_id": 1 00:03:58.284 } 00:03:58.284 Got JSON-RPC error response 00:03:58.284 response: 00:03:58.284 { 00:03:58.284 "code": -19, 00:03:58.284 "message": "No such device" 00:03:58.284 } 00:03:58.284 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:58.284 10:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:58.284 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.284 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.284 [2024-11-06 10:45:49.366655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:58.284 10:45:49 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.284 10:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:58.284 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.284 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.284 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.284 10:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:58.284 { 00:03:58.284 "subsystems": [ 00:03:58.284 { 00:03:58.284 "subsystem": "fsdev", 00:03:58.284 "config": [ 00:03:58.284 { 00:03:58.284 "method": "fsdev_set_opts", 00:03:58.284 "params": { 00:03:58.284 "fsdev_io_pool_size": 65535, 00:03:58.284 "fsdev_io_cache_size": 256 00:03:58.284 } 00:03:58.284 } 00:03:58.284 ] 00:03:58.284 }, 00:03:58.284 { 00:03:58.284 "subsystem": "vfio_user_target", 00:03:58.284 "config": null 00:03:58.284 }, 00:03:58.284 { 00:03:58.284 "subsystem": "keyring", 00:03:58.284 "config": [] 00:03:58.284 }, 00:03:58.284 { 00:03:58.284 "subsystem": "iobuf", 00:03:58.284 "config": [ 00:03:58.284 { 00:03:58.284 "method": "iobuf_set_options", 00:03:58.284 "params": { 00:03:58.284 "small_pool_count": 8192, 00:03:58.284 "large_pool_count": 1024, 00:03:58.284 "small_bufsize": 8192, 00:03:58.284 "large_bufsize": 135168, 00:03:58.284 "enable_numa": false 00:03:58.284 } 00:03:58.284 } 00:03:58.284 ] 00:03:58.284 }, 00:03:58.284 { 00:03:58.284 "subsystem": "sock", 00:03:58.284 "config": [ 00:03:58.284 { 00:03:58.284 "method": "sock_set_default_impl", 00:03:58.284 "params": { 00:03:58.284 "impl_name": "posix" 00:03:58.284 } 00:03:58.284 }, 00:03:58.284 { 00:03:58.284 "method": "sock_impl_set_options", 00:03:58.284 "params": { 00:03:58.284 "impl_name": "ssl", 00:03:58.284 "recv_buf_size": 4096, 00:03:58.284 "send_buf_size": 4096, 
00:03:58.284 "enable_recv_pipe": true, 00:03:58.284 "enable_quickack": false, 00:03:58.284 "enable_placement_id": 0, 00:03:58.284 "enable_zerocopy_send_server": true, 00:03:58.284 "enable_zerocopy_send_client": false, 00:03:58.284 "zerocopy_threshold": 0, 00:03:58.284 "tls_version": 0, 00:03:58.284 "enable_ktls": false 00:03:58.284 } 00:03:58.284 }, 00:03:58.284 { 00:03:58.284 "method": "sock_impl_set_options", 00:03:58.284 "params": { 00:03:58.284 "impl_name": "posix", 00:03:58.284 "recv_buf_size": 2097152, 00:03:58.284 "send_buf_size": 2097152, 00:03:58.284 "enable_recv_pipe": true, 00:03:58.284 "enable_quickack": false, 00:03:58.284 "enable_placement_id": 0, 00:03:58.284 "enable_zerocopy_send_server": true, 00:03:58.284 "enable_zerocopy_send_client": false, 00:03:58.284 "zerocopy_threshold": 0, 00:03:58.284 "tls_version": 0, 00:03:58.284 "enable_ktls": false 00:03:58.284 } 00:03:58.284 } 00:03:58.284 ] 00:03:58.284 }, 00:03:58.285 { 00:03:58.285 "subsystem": "vmd", 00:03:58.285 "config": [] 00:03:58.285 }, 00:03:58.285 { 00:03:58.285 "subsystem": "accel", 00:03:58.285 "config": [ 00:03:58.285 { 00:03:58.285 "method": "accel_set_options", 00:03:58.285 "params": { 00:03:58.285 "small_cache_size": 128, 00:03:58.285 "large_cache_size": 16, 00:03:58.285 "task_count": 2048, 00:03:58.285 "sequence_count": 2048, 00:03:58.285 "buf_count": 2048 00:03:58.285 } 00:03:58.285 } 00:03:58.285 ] 00:03:58.285 }, 00:03:58.285 { 00:03:58.285 "subsystem": "bdev", 00:03:58.285 "config": [ 00:03:58.285 { 00:03:58.285 "method": "bdev_set_options", 00:03:58.285 "params": { 00:03:58.285 "bdev_io_pool_size": 65535, 00:03:58.285 "bdev_io_cache_size": 256, 00:03:58.285 "bdev_auto_examine": true, 00:03:58.285 "iobuf_small_cache_size": 128, 00:03:58.285 "iobuf_large_cache_size": 16 00:03:58.285 } 00:03:58.285 }, 00:03:58.285 { 00:03:58.285 "method": "bdev_raid_set_options", 00:03:58.285 "params": { 00:03:58.285 "process_window_size_kb": 1024, 00:03:58.285 "process_max_bandwidth_mb_sec": 0 
00:03:58.285 } 00:03:58.285 }, 00:03:58.285 { 00:03:58.285 "method": "bdev_iscsi_set_options", 00:03:58.285 "params": { 00:03:58.285 "timeout_sec": 30 00:03:58.285 } 00:03:58.285 }, 00:03:58.285 { 00:03:58.285 "method": "bdev_nvme_set_options", 00:03:58.285 "params": { 00:03:58.285 "action_on_timeout": "none", 00:03:58.285 "timeout_us": 0, 00:03:58.285 "timeout_admin_us": 0, 00:03:58.285 "keep_alive_timeout_ms": 10000, 00:03:58.285 "arbitration_burst": 0, 00:03:58.285 "low_priority_weight": 0, 00:03:58.285 "medium_priority_weight": 0, 00:03:58.285 "high_priority_weight": 0, 00:03:58.285 "nvme_adminq_poll_period_us": 10000, 00:03:58.285 "nvme_ioq_poll_period_us": 0, 00:03:58.285 "io_queue_requests": 0, 00:03:58.285 "delay_cmd_submit": true, 00:03:58.285 "transport_retry_count": 4, 00:03:58.285 "bdev_retry_count": 3, 00:03:58.285 "transport_ack_timeout": 0, 00:03:58.285 "ctrlr_loss_timeout_sec": 0, 00:03:58.285 "reconnect_delay_sec": 0, 00:03:58.285 "fast_io_fail_timeout_sec": 0, 00:03:58.285 "disable_auto_failback": false, 00:03:58.285 "generate_uuids": false, 00:03:58.285 "transport_tos": 0, 00:03:58.285 "nvme_error_stat": false, 00:03:58.285 "rdma_srq_size": 0, 00:03:58.285 "io_path_stat": false, 00:03:58.285 "allow_accel_sequence": false, 00:03:58.285 "rdma_max_cq_size": 0, 00:03:58.285 "rdma_cm_event_timeout_ms": 0, 00:03:58.285 "dhchap_digests": [ 00:03:58.285 "sha256", 00:03:58.285 "sha384", 00:03:58.285 "sha512" 00:03:58.285 ], 00:03:58.285 "dhchap_dhgroups": [ 00:03:58.285 "null", 00:03:58.285 "ffdhe2048", 00:03:58.285 "ffdhe3072", 00:03:58.285 "ffdhe4096", 00:03:58.285 "ffdhe6144", 00:03:58.285 "ffdhe8192" 00:03:58.285 ] 00:03:58.285 } 00:03:58.285 }, 00:03:58.285 { 00:03:58.285 "method": "bdev_nvme_set_hotplug", 00:03:58.285 "params": { 00:03:58.285 "period_us": 100000, 00:03:58.285 "enable": false 00:03:58.285 } 00:03:58.285 }, 00:03:58.285 { 00:03:58.285 "method": "bdev_wait_for_examine" 00:03:58.285 } 00:03:58.285 ] 00:03:58.285 }, 00:03:58.285 { 
00:03:58.285 "subsystem": "scsi", 00:03:58.285 "config": null 00:03:58.285 }, 00:03:58.285 { 00:03:58.285 "subsystem": "scheduler", 00:03:58.285 "config": [ 00:03:58.285 { 00:03:58.285 "method": "framework_set_scheduler", 00:03:58.285 "params": { 00:03:58.285 "name": "static" 00:03:58.285 } 00:03:58.285 } 00:03:58.285 ] 00:03:58.285 }, 00:03:58.285 { 00:03:58.285 "subsystem": "vhost_scsi", 00:03:58.285 "config": [] 00:03:58.285 }, 00:03:58.285 { 00:03:58.285 "subsystem": "vhost_blk", 00:03:58.285 "config": [] 00:03:58.285 }, 00:03:58.285 { 00:03:58.285 "subsystem": "ublk", 00:03:58.285 "config": [] 00:03:58.285 }, 00:03:58.285 { 00:03:58.285 "subsystem": "nbd", 00:03:58.285 "config": [] 00:03:58.285 }, 00:03:58.285 { 00:03:58.285 "subsystem": "nvmf", 00:03:58.285 "config": [ 00:03:58.285 { 00:03:58.285 "method": "nvmf_set_config", 00:03:58.285 "params": { 00:03:58.285 "discovery_filter": "match_any", 00:03:58.285 "admin_cmd_passthru": { 00:03:58.285 "identify_ctrlr": false 00:03:58.285 }, 00:03:58.285 "dhchap_digests": [ 00:03:58.285 "sha256", 00:03:58.285 "sha384", 00:03:58.285 "sha512" 00:03:58.285 ], 00:03:58.285 "dhchap_dhgroups": [ 00:03:58.285 "null", 00:03:58.285 "ffdhe2048", 00:03:58.285 "ffdhe3072", 00:03:58.285 "ffdhe4096", 00:03:58.285 "ffdhe6144", 00:03:58.285 "ffdhe8192" 00:03:58.285 ] 00:03:58.285 } 00:03:58.285 }, 00:03:58.285 { 00:03:58.285 "method": "nvmf_set_max_subsystems", 00:03:58.285 "params": { 00:03:58.285 "max_subsystems": 1024 00:03:58.285 } 00:03:58.285 }, 00:03:58.285 { 00:03:58.285 "method": "nvmf_set_crdt", 00:03:58.285 "params": { 00:03:58.285 "crdt1": 0, 00:03:58.285 "crdt2": 0, 00:03:58.285 "crdt3": 0 00:03:58.285 } 00:03:58.285 }, 00:03:58.285 { 00:03:58.285 "method": "nvmf_create_transport", 00:03:58.285 "params": { 00:03:58.285 "trtype": "TCP", 00:03:58.285 "max_queue_depth": 128, 00:03:58.285 "max_io_qpairs_per_ctrlr": 127, 00:03:58.285 "in_capsule_data_size": 4096, 00:03:58.285 "max_io_size": 131072, 00:03:58.285 
"io_unit_size": 131072, 00:03:58.285 "max_aq_depth": 128, 00:03:58.285 "num_shared_buffers": 511, 00:03:58.285 "buf_cache_size": 4294967295, 00:03:58.285 "dif_insert_or_strip": false, 00:03:58.285 "zcopy": false, 00:03:58.285 "c2h_success": true, 00:03:58.285 "sock_priority": 0, 00:03:58.285 "abort_timeout_sec": 1, 00:03:58.285 "ack_timeout": 0, 00:03:58.285 "data_wr_pool_size": 0 00:03:58.285 } 00:03:58.285 } 00:03:58.285 ] 00:03:58.285 }, 00:03:58.285 { 00:03:58.285 "subsystem": "iscsi", 00:03:58.285 "config": [ 00:03:58.285 { 00:03:58.285 "method": "iscsi_set_options", 00:03:58.285 "params": { 00:03:58.285 "node_base": "iqn.2016-06.io.spdk", 00:03:58.285 "max_sessions": 128, 00:03:58.285 "max_connections_per_session": 2, 00:03:58.285 "max_queue_depth": 64, 00:03:58.285 "default_time2wait": 2, 00:03:58.285 "default_time2retain": 20, 00:03:58.285 "first_burst_length": 8192, 00:03:58.285 "immediate_data": true, 00:03:58.285 "allow_duplicated_isid": false, 00:03:58.285 "error_recovery_level": 0, 00:03:58.285 "nop_timeout": 60, 00:03:58.285 "nop_in_interval": 30, 00:03:58.285 "disable_chap": false, 00:03:58.285 "require_chap": false, 00:03:58.285 "mutual_chap": false, 00:03:58.285 "chap_group": 0, 00:03:58.285 "max_large_datain_per_connection": 64, 00:03:58.285 "max_r2t_per_connection": 4, 00:03:58.285 "pdu_pool_size": 36864, 00:03:58.285 "immediate_data_pool_size": 16384, 00:03:58.285 "data_out_pool_size": 2048 00:03:58.285 } 00:03:58.285 } 00:03:58.285 ] 00:03:58.285 } 00:03:58.285 ] 00:03:58.285 } 00:03:58.285 10:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:58.285 10:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3015613 00:03:58.285 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3015613 ']' 00:03:58.285 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3015613 00:03:58.285 10:45:49 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # uname 00:03:58.285 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:58.285 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3015613 00:03:58.285 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:58.285 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:58.285 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3015613' 00:03:58.285 killing process with pid 3015613 00:03:58.285 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3015613 00:03:58.285 10:45:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3015613 00:03:58.545 10:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3015927 00:03:58.545 10:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:58.545 10:45:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:03.827 10:45:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3015927 00:04:03.827 10:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3015927 ']' 00:04:03.827 10:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3015927 00:04:03.827 10:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:03.828 10:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:03.828 10:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3015927 00:04:03.828 10:45:54 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:03.828 10:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:03.828 10:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3015927' 00:04:03.828 killing process with pid 3015927 00:04:03.828 10:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3015927 00:04:03.828 10:45:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3015927 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:03.828 00:04:03.828 real 0m6.546s 00:04:03.828 user 0m6.429s 00:04:03.828 sys 0m0.528s 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.828 ************************************ 00:04:03.828 END TEST skip_rpc_with_json 00:04:03.828 ************************************ 00:04:03.828 10:45:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:03.828 10:45:55 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:03.828 10:45:55 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:03.828 10:45:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.828 ************************************ 00:04:03.828 START TEST skip_rpc_with_delay 00:04:03.828 ************************************ 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:03.828 [2024-11-06 10:45:55.195075] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:03.828 00:04:03.828 real 0m0.072s 00:04:03.828 user 0m0.042s 00:04:03.828 sys 0m0.029s 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:03.828 10:45:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:03.828 ************************************ 00:04:03.828 END TEST skip_rpc_with_delay 00:04:03.828 ************************************ 00:04:03.828 10:45:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:04.089 10:45:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:04.089 10:45:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:04.089 10:45:55 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:04.089 10:45:55 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:04.089 10:45:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.089 ************************************ 00:04:04.089 START TEST exit_on_failed_rpc_init 00:04:04.089 ************************************ 00:04:04.089 10:45:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:04.089 10:45:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3017053 00:04:04.089 10:45:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3017053 00:04:04.089 10:45:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 3017053 ']' 00:04:04.089 10:45:55 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.089 10:45:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:04.089 10:45:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.089 10:45:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:04.089 10:45:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.089 10:45:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:04.089 [2024-11-06 10:45:55.338650] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:04:04.089 [2024-11-06 10:45:55.338712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017053 ] 00:04:04.089 [2024-11-06 10:45:55.413568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.089 [2024-11-06 10:45:55.455952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.030 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:05.030 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:05.030 10:45:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.030 10:45:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:05.030 10:45:56 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:05.030 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:05.030 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.030 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:05.030 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.030 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:05.030 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.030 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:05.030 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.030 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:05.030 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:05.030 [2024-11-06 10:45:56.172629] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:04:05.031 [2024-11-06 10:45:56.172703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017330 ] 00:04:05.031 [2024-11-06 10:45:56.260261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.031 [2024-11-06 10:45:56.295848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:05.031 [2024-11-06 10:45:56.295901] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:05.031 [2024-11-06 10:45:56.295911] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:05.031 [2024-11-06 10:45:56.295918] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:05.031 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:05.031 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:05.031 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:05.031 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:05.031 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:05.031 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:05.031 10:45:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:05.031 10:45:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3017053 00:04:05.031 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 3017053 ']' 00:04:05.031 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 3017053 00:04:05.031 10:45:56 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:05.031 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:05.031 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3017053 00:04:05.031 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:05.031 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:05.031 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3017053' 00:04:05.031 killing process with pid 3017053 00:04:05.031 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 3017053 00:04:05.031 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 3017053 00:04:05.291 00:04:05.291 real 0m1.319s 00:04:05.291 user 0m1.530s 00:04:05.291 sys 0m0.377s 00:04:05.291 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:05.291 10:45:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:05.291 ************************************ 00:04:05.291 END TEST exit_on_failed_rpc_init 00:04:05.291 ************************************ 00:04:05.291 10:45:56 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:05.291 00:04:05.291 real 0m13.677s 00:04:05.291 user 0m13.293s 00:04:05.292 sys 0m1.449s 00:04:05.292 10:45:56 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:05.292 10:45:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.292 ************************************ 00:04:05.292 END TEST skip_rpc 00:04:05.292 ************************************ 00:04:05.292 10:45:56 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:05.292 10:45:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:05.292 10:45:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:05.292 10:45:56 -- common/autotest_common.sh@10 -- # set +x 00:04:05.553 ************************************ 00:04:05.553 START TEST rpc_client 00:04:05.553 ************************************ 00:04:05.553 10:45:56 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:05.553 * Looking for test storage... 00:04:05.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:05.553 10:45:56 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:05.553 10:45:56 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:05.553 10:45:56 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:05.553 10:45:56 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.553 10:45:56 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:05.553 10:45:56 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.553 10:45:56 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:05.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.553 --rc genhtml_branch_coverage=1 00:04:05.553 --rc genhtml_function_coverage=1 00:04:05.553 --rc genhtml_legend=1 00:04:05.553 --rc geninfo_all_blocks=1 00:04:05.553 --rc geninfo_unexecuted_blocks=1 00:04:05.553 00:04:05.553 ' 00:04:05.553 10:45:56 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:05.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.553 --rc genhtml_branch_coverage=1 
00:04:05.553 --rc genhtml_function_coverage=1 00:04:05.553 --rc genhtml_legend=1 00:04:05.553 --rc geninfo_all_blocks=1 00:04:05.553 --rc geninfo_unexecuted_blocks=1 00:04:05.553 00:04:05.553 ' 00:04:05.553 10:45:56 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:05.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.553 --rc genhtml_branch_coverage=1 00:04:05.553 --rc genhtml_function_coverage=1 00:04:05.553 --rc genhtml_legend=1 00:04:05.553 --rc geninfo_all_blocks=1 00:04:05.553 --rc geninfo_unexecuted_blocks=1 00:04:05.553 00:04:05.553 ' 00:04:05.553 10:45:56 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:05.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.553 --rc genhtml_branch_coverage=1 00:04:05.553 --rc genhtml_function_coverage=1 00:04:05.553 --rc genhtml_legend=1 00:04:05.553 --rc geninfo_all_blocks=1 00:04:05.553 --rc geninfo_unexecuted_blocks=1 00:04:05.553 00:04:05.553 ' 00:04:05.553 10:45:56 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:05.553 OK 00:04:05.553 10:45:56 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:05.553 00:04:05.553 real 0m0.229s 00:04:05.553 user 0m0.134s 00:04:05.553 sys 0m0.110s 00:04:05.553 10:45:56 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:05.553 10:45:56 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:05.553 ************************************ 00:04:05.553 END TEST rpc_client 00:04:05.553 ************************************ 00:04:05.815 10:45:56 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:05.815 10:45:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:05.815 10:45:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:05.815 10:45:56 -- common/autotest_common.sh@10 
-- # set +x 00:04:05.815 ************************************ 00:04:05.815 START TEST json_config 00:04:05.815 ************************************ 00:04:05.815 10:45:57 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:05.815 10:45:57 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:05.815 10:45:57 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:05.815 10:45:57 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:05.815 10:45:57 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:05.815 10:45:57 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.815 10:45:57 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.815 10:45:57 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.815 10:45:57 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.815 10:45:57 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.815 10:45:57 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.815 10:45:57 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.815 10:45:57 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.815 10:45:57 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.815 10:45:57 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.815 10:45:57 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.815 10:45:57 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:05.815 10:45:57 json_config -- scripts/common.sh@345 -- # : 1 00:04:05.815 10:45:57 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.815 10:45:57 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.815 10:45:57 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:05.815 10:45:57 json_config -- scripts/common.sh@353 -- # local d=1 00:04:05.815 10:45:57 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.815 10:45:57 json_config -- scripts/common.sh@355 -- # echo 1 00:04:05.815 10:45:57 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.815 10:45:57 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:05.815 10:45:57 json_config -- scripts/common.sh@353 -- # local d=2 00:04:05.815 10:45:57 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.815 10:45:57 json_config -- scripts/common.sh@355 -- # echo 2 00:04:05.815 10:45:57 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.815 10:45:57 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.815 10:45:57 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.815 10:45:57 json_config -- scripts/common.sh@368 -- # return 0 00:04:05.815 10:45:57 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.815 10:45:57 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:05.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.815 --rc genhtml_branch_coverage=1 00:04:05.815 --rc genhtml_function_coverage=1 00:04:05.815 --rc genhtml_legend=1 00:04:05.815 --rc geninfo_all_blocks=1 00:04:05.815 --rc geninfo_unexecuted_blocks=1 00:04:05.815 00:04:05.815 ' 00:04:05.815 10:45:57 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:05.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.815 --rc genhtml_branch_coverage=1 00:04:05.815 --rc genhtml_function_coverage=1 00:04:05.815 --rc genhtml_legend=1 00:04:05.815 --rc geninfo_all_blocks=1 00:04:05.815 --rc geninfo_unexecuted_blocks=1 00:04:05.815 00:04:05.815 ' 00:04:05.815 10:45:57 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:05.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.815 --rc genhtml_branch_coverage=1 00:04:05.815 --rc genhtml_function_coverage=1 00:04:05.815 --rc genhtml_legend=1 00:04:05.815 --rc geninfo_all_blocks=1 00:04:05.815 --rc geninfo_unexecuted_blocks=1 00:04:05.815 00:04:05.815 ' 00:04:05.815 10:45:57 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:05.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.815 --rc genhtml_branch_coverage=1 00:04:05.815 --rc genhtml_function_coverage=1 00:04:05.815 --rc genhtml_legend=1 00:04:05.815 --rc geninfo_all_blocks=1 00:04:05.815 --rc geninfo_unexecuted_blocks=1 00:04:05.815 00:04:05.815 ' 00:04:05.815 10:45:57 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:05.815 10:45:57 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:05.815 10:45:57 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:05.815 10:45:57 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:05.815 10:45:57 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:05.815 10:45:57 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:05.816 10:45:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.816 10:45:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.816 10:45:57 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.816 10:45:57 json_config -- paths/export.sh@5 -- # export PATH 00:04:05.816 10:45:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.816 10:45:57 json_config -- nvmf/common.sh@51 -- # : 0 00:04:05.816 10:45:57 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:05.816 10:45:57 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:05.816 10:45:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:05.816 10:45:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:05.816 10:45:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:05.816 10:45:57 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:05.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:05.816 10:45:57 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:05.816 10:45:57 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:05.816 10:45:57 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:05.816 10:45:57 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:05.816 10:45:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:05.816 10:45:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:05.816 10:45:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:05.816 10:45:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:05.816 10:45:57 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:05.816 10:45:57 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:05.816 10:45:57 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:05.816 10:45:57 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:05.816 10:45:57 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:05.816 10:45:57 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:05.816 10:45:57 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:06.077 10:45:57 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:06.077 10:45:57 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:06.077 10:45:57 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:06.077 10:45:57 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:06.077 INFO: JSON configuration test init 00:04:06.077 10:45:57 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:06.077 10:45:57 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:06.077 10:45:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:06.077 10:45:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.077 10:45:57 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:06.077 10:45:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:06.077 10:45:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.077 10:45:57 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:06.077 10:45:57 json_config -- json_config/common.sh@9 -- # local app=target 00:04:06.077 10:45:57 json_config -- json_config/common.sh@10 -- # shift 00:04:06.077 10:45:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:06.077 10:45:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:06.077 10:45:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:06.077 10:45:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.077 10:45:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.077 10:45:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3017656 00:04:06.077 10:45:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:06.077 Waiting for target to run... 
00:04:06.077 10:45:57 json_config -- json_config/common.sh@25 -- # waitforlisten 3017656 /var/tmp/spdk_tgt.sock 00:04:06.077 10:45:57 json_config -- common/autotest_common.sh@833 -- # '[' -z 3017656 ']' 00:04:06.077 10:45:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:06.077 10:45:57 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:06.077 10:45:57 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:06.077 10:45:57 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:06.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:06.077 10:45:57 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:06.077 10:45:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.077 [2024-11-06 10:45:57.314410] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:04:06.077 [2024-11-06 10:45:57.314490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017656 ] 00:04:06.338 [2024-11-06 10:45:57.600937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.338 [2024-11-06 10:45:57.630975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.910 10:45:58 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:06.910 10:45:58 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:06.910 10:45:58 json_config -- json_config/common.sh@26 -- # echo '' 00:04:06.910 00:04:06.910 10:45:58 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:06.910 10:45:58 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:06.910 10:45:58 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:06.910 10:45:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.910 10:45:58 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:06.910 10:45:58 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:06.910 10:45:58 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:06.910 10:45:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.910 10:45:58 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:06.910 10:45:58 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:06.910 10:45:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:07.482 10:45:58 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:07.482 10:45:58 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:07.482 10:45:58 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:07.482 10:45:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.482 10:45:58 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:07.482 10:45:58 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:07.482 10:45:58 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:07.482 10:45:58 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:07.482 10:45:58 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:07.482 10:45:58 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:07.482 10:45:58 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:07.482 10:45:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@54 -- # sort 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:07.743 10:45:58 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:07.743 10:45:58 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:07.743 10:45:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:07.743 10:45:58 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:07.743 10:45:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:07.743 10:45:58 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:07.743 10:45:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:07.743 MallocForNvmf0 00:04:07.743 10:45:59 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:07.743 10:45:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:08.003 MallocForNvmf1 00:04:08.003 10:45:59 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:08.003 10:45:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:08.264 [2024-11-06 10:45:59.478573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:08.264 10:45:59 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:08.264 10:45:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:08.524 10:45:59 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:08.524 10:45:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:08.524 10:45:59 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:08.524 10:45:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:08.784 10:46:00 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:08.784 10:46:00 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:08.784 [2024-11-06 10:46:00.164803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:08.784 10:46:00 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:08.784 10:46:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:08.784 10:46:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.043 10:46:00 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:09.043 10:46:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:09.043 10:46:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.043 10:46:00 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:09.043 10:46:00 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:09.043 10:46:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:09.043 MallocBdevForConfigChangeCheck 00:04:09.043 10:46:00 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:09.043 10:46:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:09.043 10:46:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.303 10:46:00 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:09.303 10:46:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:09.563 10:46:00 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:09.563 INFO: shutting down applications... 00:04:09.564 10:46:00 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:09.564 10:46:00 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:09.564 10:46:00 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:09.564 10:46:00 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:09.824 Calling clear_iscsi_subsystem 00:04:09.824 Calling clear_nvmf_subsystem 00:04:09.824 Calling clear_nbd_subsystem 00:04:09.824 Calling clear_ublk_subsystem 00:04:09.824 Calling clear_vhost_blk_subsystem 00:04:09.824 Calling clear_vhost_scsi_subsystem 00:04:09.824 Calling clear_bdev_subsystem 00:04:10.084 10:46:01 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:10.084 10:46:01 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:10.084 10:46:01 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:10.084 10:46:01 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:10.084 10:46:01 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:10.084 10:46:01 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:10.344 10:46:01 json_config -- json_config/json_config.sh@352 -- # break 00:04:10.344 10:46:01 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:10.344 10:46:01 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:10.344 10:46:01 json_config -- json_config/common.sh@31 -- # local app=target 00:04:10.344 10:46:01 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:10.344 10:46:01 json_config -- json_config/common.sh@35 -- # [[ -n 3017656 ]] 00:04:10.344 10:46:01 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3017656 00:04:10.344 10:46:01 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:10.344 10:46:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:10.344 10:46:01 json_config -- json_config/common.sh@41 -- # kill -0 3017656 00:04:10.344 10:46:01 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:10.916 10:46:02 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:10.916 10:46:02 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:10.916 10:46:02 json_config -- json_config/common.sh@41 -- # kill -0 3017656 00:04:10.916 10:46:02 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:10.916 10:46:02 json_config -- json_config/common.sh@43 -- # break 00:04:10.916 10:46:02 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:10.916 10:46:02 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:10.916 SPDK target shutdown done 00:04:10.916 10:46:02 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:10.916 INFO: relaunching applications... 
00:04:10.916 10:46:02 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:10.916 10:46:02 json_config -- json_config/common.sh@9 -- # local app=target 00:04:10.916 10:46:02 json_config -- json_config/common.sh@10 -- # shift 00:04:10.916 10:46:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:10.916 10:46:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:10.916 10:46:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:10.916 10:46:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.916 10:46:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.916 10:46:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3018719 00:04:10.916 10:46:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:10.916 Waiting for target to run... 00:04:10.916 10:46:02 json_config -- json_config/common.sh@25 -- # waitforlisten 3018719 /var/tmp/spdk_tgt.sock 00:04:10.916 10:46:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:10.916 10:46:02 json_config -- common/autotest_common.sh@833 -- # '[' -z 3018719 ']' 00:04:10.916 10:46:02 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:10.916 10:46:02 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:10.916 10:46:02 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:10.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:10.916 10:46:02 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:10.916 10:46:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.916 [2024-11-06 10:46:02.148864] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:04:10.916 [2024-11-06 10:46:02.148924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3018719 ] 00:04:11.175 [2024-11-06 10:46:02.551969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.175 [2024-11-06 10:46:02.586885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.746 [2024-11-06 10:46:03.101238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:11.746 [2024-11-06 10:46:03.133625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:12.006 10:46:03 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:12.006 10:46:03 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:12.006 10:46:03 json_config -- json_config/common.sh@26 -- # echo '' 00:04:12.006 00:04:12.006 10:46:03 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:12.006 10:46:03 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:12.006 INFO: Checking if target configuration is the same... 
00:04:12.006 10:46:03 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:12.006 10:46:03 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:12.006 10:46:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:12.006 + '[' 2 -ne 2 ']' 00:04:12.006 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:12.006 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:12.006 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:12.006 +++ basename /dev/fd/62 00:04:12.006 ++ mktemp /tmp/62.XXX 00:04:12.006 + tmp_file_1=/tmp/62.mND 00:04:12.006 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:12.006 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:12.006 + tmp_file_2=/tmp/spdk_tgt_config.json.8CP 00:04:12.006 + ret=0 00:04:12.006 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:12.266 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:12.266 + diff -u /tmp/62.mND /tmp/spdk_tgt_config.json.8CP 00:04:12.266 + echo 'INFO: JSON config files are the same' 00:04:12.266 INFO: JSON config files are the same 00:04:12.266 + rm /tmp/62.mND /tmp/spdk_tgt_config.json.8CP 00:04:12.266 + exit 0 00:04:12.266 10:46:03 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:12.266 10:46:03 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:12.266 INFO: changing configuration and checking if this can be detected... 
00:04:12.266 10:46:03 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:12.266 10:46:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:12.527 10:46:03 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:12.527 10:46:03 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:12.527 10:46:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:12.527 + '[' 2 -ne 2 ']' 00:04:12.527 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:12.527 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:12.527 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:12.527 +++ basename /dev/fd/62 00:04:12.527 ++ mktemp /tmp/62.XXX 00:04:12.527 + tmp_file_1=/tmp/62.dp4 00:04:12.527 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:12.527 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:12.527 + tmp_file_2=/tmp/spdk_tgt_config.json.zXa 00:04:12.527 + ret=0 00:04:12.527 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:12.787 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:12.787 + diff -u /tmp/62.dp4 /tmp/spdk_tgt_config.json.zXa 00:04:12.787 + ret=1 00:04:12.787 + echo '=== Start of file: /tmp/62.dp4 ===' 00:04:12.787 + cat /tmp/62.dp4 00:04:12.787 + echo '=== End of file: /tmp/62.dp4 ===' 00:04:12.787 + echo '' 00:04:12.787 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zXa ===' 00:04:12.787 + cat /tmp/spdk_tgt_config.json.zXa 00:04:12.787 + echo '=== End of file: /tmp/spdk_tgt_config.json.zXa ===' 00:04:12.787 + echo '' 00:04:12.787 + rm /tmp/62.dp4 /tmp/spdk_tgt_config.json.zXa 00:04:12.787 + exit 1 00:04:12.787 10:46:04 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:12.787 INFO: configuration change detected. 
00:04:12.787 10:46:04 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:12.787 10:46:04 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:12.787 10:46:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:12.787 10:46:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.787 10:46:04 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:12.787 10:46:04 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:12.787 10:46:04 json_config -- json_config/json_config.sh@324 -- # [[ -n 3018719 ]] 00:04:12.787 10:46:04 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:12.787 10:46:04 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:12.787 10:46:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:12.787 10:46:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.787 10:46:04 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:12.787 10:46:04 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:12.788 10:46:04 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:12.788 10:46:04 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:12.788 10:46:04 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:12.788 10:46:04 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:12.788 10:46:04 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:12.788 10:46:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.788 10:46:04 json_config -- json_config/json_config.sh@330 -- # killprocess 3018719 00:04:12.788 10:46:04 json_config -- common/autotest_common.sh@952 -- # '[' -z 3018719 ']' 00:04:12.788 10:46:04 json_config -- common/autotest_common.sh@956 -- # kill -0 
3018719 00:04:12.788 10:46:04 json_config -- common/autotest_common.sh@957 -- # uname 00:04:12.788 10:46:04 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:12.788 10:46:04 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3018719 00:04:13.049 10:46:04 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:13.049 10:46:04 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:13.049 10:46:04 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3018719' 00:04:13.049 killing process with pid 3018719 00:04:13.049 10:46:04 json_config -- common/autotest_common.sh@971 -- # kill 3018719 00:04:13.049 10:46:04 json_config -- common/autotest_common.sh@976 -- # wait 3018719 00:04:13.310 10:46:04 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:13.310 10:46:04 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:13.310 10:46:04 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:13.310 10:46:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.310 10:46:04 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:13.310 10:46:04 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:13.310 INFO: Success 00:04:13.310 00:04:13.310 real 0m7.549s 00:04:13.310 user 0m8.955s 00:04:13.310 sys 0m2.085s 00:04:13.310 10:46:04 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:13.310 10:46:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.310 ************************************ 00:04:13.310 END TEST json_config 00:04:13.310 ************************************ 00:04:13.310 10:46:04 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:13.310 10:46:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:13.310 10:46:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:13.310 10:46:04 -- common/autotest_common.sh@10 -- # set +x 00:04:13.310 ************************************ 00:04:13.310 START TEST json_config_extra_key 00:04:13.310 ************************************ 00:04:13.310 10:46:04 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:13.310 10:46:04 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:13.310 10:46:04 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:13.310 10:46:04 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:13.571 10:46:04 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:13.571 10:46:04 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.571 10:46:04 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.571 10:46:04 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.571 10:46:04 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.571 10:46:04 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.571 10:46:04 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.571 10:46:04 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.571 10:46:04 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.571 10:46:04 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.571 10:46:04 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.571 10:46:04 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:13.571 10:46:04 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:13.571 10:46:04 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:13.571 10:46:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.571 10:46:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:13.571 10:46:04 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:13.571 10:46:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:13.572 10:46:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.572 10:46:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:13.572 10:46:04 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.572 10:46:04 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:13.572 10:46:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:13.572 10:46:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.572 10:46:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:13.572 10:46:04 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.572 10:46:04 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.572 10:46:04 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.572 10:46:04 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:13.572 10:46:04 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.572 10:46:04 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:13.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.572 --rc genhtml_branch_coverage=1 00:04:13.572 --rc genhtml_function_coverage=1 00:04:13.572 --rc genhtml_legend=1 00:04:13.572 --rc geninfo_all_blocks=1 
00:04:13.572 --rc geninfo_unexecuted_blocks=1 00:04:13.572 00:04:13.572 ' 00:04:13.572 10:46:04 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:13.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.572 --rc genhtml_branch_coverage=1 00:04:13.572 --rc genhtml_function_coverage=1 00:04:13.572 --rc genhtml_legend=1 00:04:13.572 --rc geninfo_all_blocks=1 00:04:13.572 --rc geninfo_unexecuted_blocks=1 00:04:13.572 00:04:13.572 ' 00:04:13.572 10:46:04 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:13.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.572 --rc genhtml_branch_coverage=1 00:04:13.572 --rc genhtml_function_coverage=1 00:04:13.572 --rc genhtml_legend=1 00:04:13.572 --rc geninfo_all_blocks=1 00:04:13.572 --rc geninfo_unexecuted_blocks=1 00:04:13.572 00:04:13.572 ' 00:04:13.572 10:46:04 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:13.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.572 --rc genhtml_branch_coverage=1 00:04:13.572 --rc genhtml_function_coverage=1 00:04:13.572 --rc genhtml_legend=1 00:04:13.572 --rc geninfo_all_blocks=1 00:04:13.572 --rc geninfo_unexecuted_blocks=1 00:04:13.572 00:04:13.572 ' 00:04:13.572 10:46:04 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:13.572 10:46:04 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:13.572 10:46:04 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:13.572 10:46:04 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:13.572 10:46:04 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:13.572 10:46:04 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.572 10:46:04 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.572 10:46:04 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.572 10:46:04 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:13.572 10:46:04 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:13.572 10:46:04 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:13.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:13.572 10:46:04 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:13.572 10:46:04 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:13.572 10:46:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:13.572 10:46:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:13.572 10:46:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:13.572 10:46:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:13.572 10:46:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:13.572 10:46:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:13.572 10:46:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:13.572 10:46:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:13.572 10:46:04 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:13.572 10:46:04 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:13.572 INFO: launching applications... 00:04:13.572 10:46:04 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:13.572 10:46:04 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:13.572 10:46:04 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:13.572 10:46:04 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:13.572 10:46:04 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:13.572 10:46:04 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:13.572 10:46:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:13.572 10:46:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:13.572 10:46:04 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3019390 00:04:13.572 10:46:04 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:13.572 Waiting for target to run... 
00:04:13.572 10:46:04 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3019390 /var/tmp/spdk_tgt.sock 00:04:13.572 10:46:04 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 3019390 ']' 00:04:13.572 10:46:04 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:13.572 10:46:04 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:13.572 10:46:04 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:13.572 10:46:04 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:13.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:13.572 10:46:04 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:13.572 10:46:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:13.572 [2024-11-06 10:46:04.911023] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:04:13.573 [2024-11-06 10:46:04.911076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3019390 ] 00:04:13.833 [2024-11-06 10:46:05.200830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.833 [2024-11-06 10:46:05.230704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.403 10:46:05 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:14.403 10:46:05 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:14.403 10:46:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:14.403 00:04:14.403 10:46:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:14.403 INFO: shutting down applications... 00:04:14.403 10:46:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:14.403 10:46:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:14.403 10:46:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:14.403 10:46:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3019390 ]] 00:04:14.403 10:46:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3019390 00:04:14.403 10:46:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:14.403 10:46:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.403 10:46:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3019390 00:04:14.403 10:46:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:14.975 10:46:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:14.975 10:46:06 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.975 10:46:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3019390 00:04:14.975 10:46:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:14.975 10:46:06 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:14.975 10:46:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:14.975 10:46:06 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:14.975 SPDK target shutdown done 00:04:14.975 10:46:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:14.975 Success 00:04:14.975 00:04:14.975 real 0m1.570s 00:04:14.975 user 0m1.203s 00:04:14.975 sys 0m0.422s 00:04:14.975 10:46:06 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:14.975 10:46:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:14.975 ************************************ 00:04:14.975 END TEST json_config_extra_key 00:04:14.975 ************************************ 00:04:14.975 10:46:06 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:14.975 10:46:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:14.975 10:46:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.975 10:46:06 -- common/autotest_common.sh@10 -- # set +x 00:04:14.975 ************************************ 00:04:14.975 START TEST alias_rpc 00:04:14.975 ************************************ 00:04:14.975 10:46:06 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:14.975 * Looking for test storage... 
00:04:14.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:14.975 10:46:06 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:15.236 10:46:06 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:15.236 10:46:06 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:15.236 10:46:06 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.236 10:46:06 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:15.236 10:46:06 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.236 10:46:06 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:15.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.236 --rc genhtml_branch_coverage=1 00:04:15.236 --rc genhtml_function_coverage=1 00:04:15.236 --rc genhtml_legend=1 00:04:15.236 --rc geninfo_all_blocks=1 00:04:15.236 --rc geninfo_unexecuted_blocks=1 00:04:15.236 00:04:15.236 ' 00:04:15.236 10:46:06 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:15.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.236 --rc genhtml_branch_coverage=1 00:04:15.236 --rc genhtml_function_coverage=1 00:04:15.236 --rc genhtml_legend=1 00:04:15.236 --rc geninfo_all_blocks=1 00:04:15.236 --rc geninfo_unexecuted_blocks=1 00:04:15.236 00:04:15.236 ' 00:04:15.236 10:46:06 alias_rpc -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:04:15.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.236 --rc genhtml_branch_coverage=1 00:04:15.236 --rc genhtml_function_coverage=1 00:04:15.236 --rc genhtml_legend=1 00:04:15.236 --rc geninfo_all_blocks=1 00:04:15.236 --rc geninfo_unexecuted_blocks=1 00:04:15.236 00:04:15.236 ' 00:04:15.236 10:46:06 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:15.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.236 --rc genhtml_branch_coverage=1 00:04:15.236 --rc genhtml_function_coverage=1 00:04:15.236 --rc genhtml_legend=1 00:04:15.236 --rc geninfo_all_blocks=1 00:04:15.236 --rc geninfo_unexecuted_blocks=1 00:04:15.236 00:04:15.236 ' 00:04:15.236 10:46:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:15.236 10:46:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3019787 00:04:15.236 10:46:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3019787 00:04:15.236 10:46:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.236 10:46:06 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 3019787 ']' 00:04:15.236 10:46:06 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.236 10:46:06 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:15.236 10:46:06 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.236 10:46:06 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:15.236 10:46:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.236 [2024-11-06 10:46:06.565036] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:04:15.236 [2024-11-06 10:46:06.565111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3019787 ] 00:04:15.236 [2024-11-06 10:46:06.642437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.496 [2024-11-06 10:46:06.684179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.067 10:46:07 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:16.067 10:46:07 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:16.067 10:46:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:16.327 10:46:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3019787 00:04:16.327 10:46:07 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 3019787 ']' 00:04:16.327 10:46:07 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 3019787 00:04:16.327 10:46:07 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:16.327 10:46:07 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:16.327 10:46:07 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3019787 00:04:16.327 10:46:07 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:16.327 10:46:07 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:16.327 10:46:07 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3019787' 00:04:16.327 killing process with pid 3019787 00:04:16.327 10:46:07 alias_rpc -- common/autotest_common.sh@971 -- # kill 3019787 00:04:16.327 10:46:07 alias_rpc -- common/autotest_common.sh@976 -- # wait 3019787 00:04:16.588 00:04:16.588 real 0m1.548s 00:04:16.588 user 0m1.729s 00:04:16.588 sys 0m0.410s 00:04:16.588 10:46:07 alias_rpc -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:04:16.588 10:46:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.588 ************************************ 00:04:16.588 END TEST alias_rpc 00:04:16.588 ************************************ 00:04:16.588 10:46:07 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:16.588 10:46:07 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:16.588 10:46:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:16.588 10:46:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.589 10:46:07 -- common/autotest_common.sh@10 -- # set +x 00:04:16.589 ************************************ 00:04:16.589 START TEST spdkcli_tcp 00:04:16.589 ************************************ 00:04:16.589 10:46:07 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:16.849 * Looking for test storage... 
00:04:16.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:16.849 10:46:08 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:16.849 10:46:08 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:16.849 10:46:08 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:16.849 10:46:08 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:16.849 10:46:08 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.850 10:46:08 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:16.850 10:46:08 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.850 10:46:08 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:16.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.850 --rc genhtml_branch_coverage=1 00:04:16.850 --rc genhtml_function_coverage=1 00:04:16.850 --rc genhtml_legend=1 00:04:16.850 --rc geninfo_all_blocks=1 00:04:16.850 --rc geninfo_unexecuted_blocks=1 00:04:16.850 00:04:16.850 ' 00:04:16.850 10:46:08 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:16.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.850 --rc genhtml_branch_coverage=1 00:04:16.850 --rc genhtml_function_coverage=1 00:04:16.850 --rc genhtml_legend=1 00:04:16.850 --rc geninfo_all_blocks=1 00:04:16.850 --rc geninfo_unexecuted_blocks=1 00:04:16.850 00:04:16.850 ' 00:04:16.850 10:46:08 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:16.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.850 --rc genhtml_branch_coverage=1 00:04:16.850 --rc genhtml_function_coverage=1 00:04:16.850 --rc genhtml_legend=1 00:04:16.850 --rc geninfo_all_blocks=1 00:04:16.850 --rc geninfo_unexecuted_blocks=1 00:04:16.850 00:04:16.850 ' 00:04:16.850 10:46:08 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:16.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.850 --rc genhtml_branch_coverage=1 00:04:16.850 --rc genhtml_function_coverage=1 00:04:16.850 --rc genhtml_legend=1 00:04:16.850 --rc geninfo_all_blocks=1 00:04:16.850 --rc geninfo_unexecuted_blocks=1 00:04:16.850 00:04:16.850 ' 00:04:16.850 10:46:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:16.850 10:46:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:16.850 10:46:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:16.850 10:46:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:16.850 10:46:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:16.850 10:46:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:16.850 10:46:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:16.850 10:46:08 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:16.850 10:46:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:16.850 10:46:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3020182 00:04:16.850 10:46:08 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3020182 00:04:16.850 10:46:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:16.850 10:46:08 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 3020182 ']' 00:04:16.850 10:46:08 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.850 10:46:08 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:16.850 10:46:08 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.850 10:46:08 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:16.850 10:46:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:16.850 [2024-11-06 10:46:08.178906] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:04:16.850 [2024-11-06 10:46:08.178977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3020182 ] 00:04:16.850 [2024-11-06 10:46:08.254173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:17.110 [2024-11-06 10:46:08.297605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.110 [2024-11-06 10:46:08.297609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.681 10:46:08 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:17.681 10:46:08 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:17.681 10:46:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3020417 00:04:17.681 10:46:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:17.681 10:46:08 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:17.943 [ 00:04:17.943 "bdev_malloc_delete", 00:04:17.943 "bdev_malloc_create", 00:04:17.943 "bdev_null_resize", 00:04:17.943 "bdev_null_delete", 00:04:17.943 "bdev_null_create", 00:04:17.943 "bdev_nvme_cuse_unregister", 00:04:17.943 "bdev_nvme_cuse_register", 00:04:17.943 "bdev_opal_new_user", 00:04:17.943 "bdev_opal_set_lock_state", 00:04:17.943 "bdev_opal_delete", 00:04:17.943 "bdev_opal_get_info", 00:04:17.943 "bdev_opal_create", 00:04:17.943 "bdev_nvme_opal_revert", 00:04:17.943 "bdev_nvme_opal_init", 00:04:17.943 "bdev_nvme_send_cmd", 00:04:17.943 "bdev_nvme_set_keys", 00:04:17.943 "bdev_nvme_get_path_iostat", 00:04:17.943 "bdev_nvme_get_mdns_discovery_info", 00:04:17.943 "bdev_nvme_stop_mdns_discovery", 00:04:17.943 "bdev_nvme_start_mdns_discovery", 00:04:17.943 "bdev_nvme_set_multipath_policy", 00:04:17.943 "bdev_nvme_set_preferred_path", 00:04:17.943 "bdev_nvme_get_io_paths", 00:04:17.943 "bdev_nvme_remove_error_injection", 00:04:17.943 "bdev_nvme_add_error_injection", 00:04:17.943 "bdev_nvme_get_discovery_info", 00:04:17.943 "bdev_nvme_stop_discovery", 00:04:17.943 "bdev_nvme_start_discovery", 00:04:17.943 "bdev_nvme_get_controller_health_info", 00:04:17.943 "bdev_nvme_disable_controller", 00:04:17.943 "bdev_nvme_enable_controller", 00:04:17.943 "bdev_nvme_reset_controller", 00:04:17.943 "bdev_nvme_get_transport_statistics", 00:04:17.943 "bdev_nvme_apply_firmware", 00:04:17.943 "bdev_nvme_detach_controller", 00:04:17.943 "bdev_nvme_get_controllers", 00:04:17.943 "bdev_nvme_attach_controller", 00:04:17.943 "bdev_nvme_set_hotplug", 00:04:17.943 "bdev_nvme_set_options", 00:04:17.943 "bdev_passthru_delete", 00:04:17.943 "bdev_passthru_create", 00:04:17.943 "bdev_lvol_set_parent_bdev", 00:04:17.943 "bdev_lvol_set_parent", 00:04:17.943 "bdev_lvol_check_shallow_copy", 00:04:17.943 "bdev_lvol_start_shallow_copy", 00:04:17.943 "bdev_lvol_grow_lvstore", 00:04:17.943 
"bdev_lvol_get_lvols", 00:04:17.943 "bdev_lvol_get_lvstores", 00:04:17.943 "bdev_lvol_delete", 00:04:17.943 "bdev_lvol_set_read_only", 00:04:17.943 "bdev_lvol_resize", 00:04:17.943 "bdev_lvol_decouple_parent", 00:04:17.943 "bdev_lvol_inflate", 00:04:17.943 "bdev_lvol_rename", 00:04:17.943 "bdev_lvol_clone_bdev", 00:04:17.943 "bdev_lvol_clone", 00:04:17.943 "bdev_lvol_snapshot", 00:04:17.943 "bdev_lvol_create", 00:04:17.943 "bdev_lvol_delete_lvstore", 00:04:17.943 "bdev_lvol_rename_lvstore", 00:04:17.943 "bdev_lvol_create_lvstore", 00:04:17.943 "bdev_raid_set_options", 00:04:17.943 "bdev_raid_remove_base_bdev", 00:04:17.943 "bdev_raid_add_base_bdev", 00:04:17.943 "bdev_raid_delete", 00:04:17.943 "bdev_raid_create", 00:04:17.943 "bdev_raid_get_bdevs", 00:04:17.943 "bdev_error_inject_error", 00:04:17.943 "bdev_error_delete", 00:04:17.943 "bdev_error_create", 00:04:17.943 "bdev_split_delete", 00:04:17.943 "bdev_split_create", 00:04:17.943 "bdev_delay_delete", 00:04:17.943 "bdev_delay_create", 00:04:17.943 "bdev_delay_update_latency", 00:04:17.943 "bdev_zone_block_delete", 00:04:17.943 "bdev_zone_block_create", 00:04:17.943 "blobfs_create", 00:04:17.943 "blobfs_detect", 00:04:17.943 "blobfs_set_cache_size", 00:04:17.943 "bdev_aio_delete", 00:04:17.943 "bdev_aio_rescan", 00:04:17.943 "bdev_aio_create", 00:04:17.943 "bdev_ftl_set_property", 00:04:17.943 "bdev_ftl_get_properties", 00:04:17.943 "bdev_ftl_get_stats", 00:04:17.943 "bdev_ftl_unmap", 00:04:17.943 "bdev_ftl_unload", 00:04:17.943 "bdev_ftl_delete", 00:04:17.943 "bdev_ftl_load", 00:04:17.943 "bdev_ftl_create", 00:04:17.943 "bdev_virtio_attach_controller", 00:04:17.943 "bdev_virtio_scsi_get_devices", 00:04:17.943 "bdev_virtio_detach_controller", 00:04:17.943 "bdev_virtio_blk_set_hotplug", 00:04:17.943 "bdev_iscsi_delete", 00:04:17.943 "bdev_iscsi_create", 00:04:17.943 "bdev_iscsi_set_options", 00:04:17.943 "accel_error_inject_error", 00:04:17.943 "ioat_scan_accel_module", 00:04:17.943 "dsa_scan_accel_module", 
00:04:17.943 "iaa_scan_accel_module", 00:04:17.943 "vfu_virtio_create_fs_endpoint", 00:04:17.943 "vfu_virtio_create_scsi_endpoint", 00:04:17.943 "vfu_virtio_scsi_remove_target", 00:04:17.943 "vfu_virtio_scsi_add_target", 00:04:17.943 "vfu_virtio_create_blk_endpoint", 00:04:17.943 "vfu_virtio_delete_endpoint", 00:04:17.943 "keyring_file_remove_key", 00:04:17.943 "keyring_file_add_key", 00:04:17.943 "keyring_linux_set_options", 00:04:17.943 "fsdev_aio_delete", 00:04:17.943 "fsdev_aio_create", 00:04:17.943 "iscsi_get_histogram", 00:04:17.943 "iscsi_enable_histogram", 00:04:17.943 "iscsi_set_options", 00:04:17.943 "iscsi_get_auth_groups", 00:04:17.943 "iscsi_auth_group_remove_secret", 00:04:17.943 "iscsi_auth_group_add_secret", 00:04:17.943 "iscsi_delete_auth_group", 00:04:17.943 "iscsi_create_auth_group", 00:04:17.944 "iscsi_set_discovery_auth", 00:04:17.944 "iscsi_get_options", 00:04:17.944 "iscsi_target_node_request_logout", 00:04:17.944 "iscsi_target_node_set_redirect", 00:04:17.944 "iscsi_target_node_set_auth", 00:04:17.944 "iscsi_target_node_add_lun", 00:04:17.944 "iscsi_get_stats", 00:04:17.944 "iscsi_get_connections", 00:04:17.944 "iscsi_portal_group_set_auth", 00:04:17.944 "iscsi_start_portal_group", 00:04:17.944 "iscsi_delete_portal_group", 00:04:17.944 "iscsi_create_portal_group", 00:04:17.944 "iscsi_get_portal_groups", 00:04:17.944 "iscsi_delete_target_node", 00:04:17.944 "iscsi_target_node_remove_pg_ig_maps", 00:04:17.944 "iscsi_target_node_add_pg_ig_maps", 00:04:17.944 "iscsi_create_target_node", 00:04:17.944 "iscsi_get_target_nodes", 00:04:17.944 "iscsi_delete_initiator_group", 00:04:17.944 "iscsi_initiator_group_remove_initiators", 00:04:17.944 "iscsi_initiator_group_add_initiators", 00:04:17.944 "iscsi_create_initiator_group", 00:04:17.944 "iscsi_get_initiator_groups", 00:04:17.944 "nvmf_set_crdt", 00:04:17.944 "nvmf_set_config", 00:04:17.944 "nvmf_set_max_subsystems", 00:04:17.944 "nvmf_stop_mdns_prr", 00:04:17.944 "nvmf_publish_mdns_prr", 
00:04:17.944 "nvmf_subsystem_get_listeners", 00:04:17.944 "nvmf_subsystem_get_qpairs", 00:04:17.944 "nvmf_subsystem_get_controllers", 00:04:17.944 "nvmf_get_stats", 00:04:17.944 "nvmf_get_transports", 00:04:17.944 "nvmf_create_transport", 00:04:17.944 "nvmf_get_targets", 00:04:17.944 "nvmf_delete_target", 00:04:17.944 "nvmf_create_target", 00:04:17.944 "nvmf_subsystem_allow_any_host", 00:04:17.944 "nvmf_subsystem_set_keys", 00:04:17.944 "nvmf_subsystem_remove_host", 00:04:17.944 "nvmf_subsystem_add_host", 00:04:17.944 "nvmf_ns_remove_host", 00:04:17.944 "nvmf_ns_add_host", 00:04:17.944 "nvmf_subsystem_remove_ns", 00:04:17.944 "nvmf_subsystem_set_ns_ana_group", 00:04:17.944 "nvmf_subsystem_add_ns", 00:04:17.944 "nvmf_subsystem_listener_set_ana_state", 00:04:17.944 "nvmf_discovery_get_referrals", 00:04:17.944 "nvmf_discovery_remove_referral", 00:04:17.944 "nvmf_discovery_add_referral", 00:04:17.944 "nvmf_subsystem_remove_listener", 00:04:17.944 "nvmf_subsystem_add_listener", 00:04:17.944 "nvmf_delete_subsystem", 00:04:17.944 "nvmf_create_subsystem", 00:04:17.944 "nvmf_get_subsystems", 00:04:17.944 "env_dpdk_get_mem_stats", 00:04:17.944 "nbd_get_disks", 00:04:17.944 "nbd_stop_disk", 00:04:17.944 "nbd_start_disk", 00:04:17.944 "ublk_recover_disk", 00:04:17.944 "ublk_get_disks", 00:04:17.944 "ublk_stop_disk", 00:04:17.944 "ublk_start_disk", 00:04:17.944 "ublk_destroy_target", 00:04:17.944 "ublk_create_target", 00:04:17.944 "virtio_blk_create_transport", 00:04:17.944 "virtio_blk_get_transports", 00:04:17.944 "vhost_controller_set_coalescing", 00:04:17.944 "vhost_get_controllers", 00:04:17.944 "vhost_delete_controller", 00:04:17.944 "vhost_create_blk_controller", 00:04:17.944 "vhost_scsi_controller_remove_target", 00:04:17.944 "vhost_scsi_controller_add_target", 00:04:17.944 "vhost_start_scsi_controller", 00:04:17.944 "vhost_create_scsi_controller", 00:04:17.944 "thread_set_cpumask", 00:04:17.944 "scheduler_set_options", 00:04:17.944 "framework_get_governor", 00:04:17.944 
"framework_get_scheduler", 00:04:17.944 "framework_set_scheduler", 00:04:17.944 "framework_get_reactors", 00:04:17.944 "thread_get_io_channels", 00:04:17.944 "thread_get_pollers", 00:04:17.944 "thread_get_stats", 00:04:17.944 "framework_monitor_context_switch", 00:04:17.944 "spdk_kill_instance", 00:04:17.944 "log_enable_timestamps", 00:04:17.944 "log_get_flags", 00:04:17.944 "log_clear_flag", 00:04:17.944 "log_set_flag", 00:04:17.944 "log_get_level", 00:04:17.944 "log_set_level", 00:04:17.944 "log_get_print_level", 00:04:17.944 "log_set_print_level", 00:04:17.944 "framework_enable_cpumask_locks", 00:04:17.944 "framework_disable_cpumask_locks", 00:04:17.944 "framework_wait_init", 00:04:17.944 "framework_start_init", 00:04:17.944 "scsi_get_devices", 00:04:17.944 "bdev_get_histogram", 00:04:17.944 "bdev_enable_histogram", 00:04:17.944 "bdev_set_qos_limit", 00:04:17.944 "bdev_set_qd_sampling_period", 00:04:17.944 "bdev_get_bdevs", 00:04:17.944 "bdev_reset_iostat", 00:04:17.944 "bdev_get_iostat", 00:04:17.944 "bdev_examine", 00:04:17.944 "bdev_wait_for_examine", 00:04:17.944 "bdev_set_options", 00:04:17.944 "accel_get_stats", 00:04:17.944 "accel_set_options", 00:04:17.944 "accel_set_driver", 00:04:17.944 "accel_crypto_key_destroy", 00:04:17.944 "accel_crypto_keys_get", 00:04:17.944 "accel_crypto_key_create", 00:04:17.944 "accel_assign_opc", 00:04:17.944 "accel_get_module_info", 00:04:17.944 "accel_get_opc_assignments", 00:04:17.944 "vmd_rescan", 00:04:17.944 "vmd_remove_device", 00:04:17.944 "vmd_enable", 00:04:17.944 "sock_get_default_impl", 00:04:17.944 "sock_set_default_impl", 00:04:17.944 "sock_impl_set_options", 00:04:17.944 "sock_impl_get_options", 00:04:17.944 "iobuf_get_stats", 00:04:17.944 "iobuf_set_options", 00:04:17.944 "keyring_get_keys", 00:04:17.944 "vfu_tgt_set_base_path", 00:04:17.944 "framework_get_pci_devices", 00:04:17.944 "framework_get_config", 00:04:17.944 "framework_get_subsystems", 00:04:17.944 "fsdev_set_opts", 00:04:17.944 "fsdev_get_opts", 
00:04:17.944 "trace_get_info", 00:04:17.944 "trace_get_tpoint_group_mask", 00:04:17.944 "trace_disable_tpoint_group", 00:04:17.944 "trace_enable_tpoint_group", 00:04:17.944 "trace_clear_tpoint_mask", 00:04:17.944 "trace_set_tpoint_mask", 00:04:17.944 "notify_get_notifications", 00:04:17.944 "notify_get_types", 00:04:17.944 "spdk_get_version", 00:04:17.944 "rpc_get_methods" 00:04:17.944 ] 00:04:17.944 10:46:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:17.944 10:46:09 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:17.944 10:46:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:17.944 10:46:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:17.944 10:46:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3020182 00:04:17.944 10:46:09 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 3020182 ']' 00:04:17.944 10:46:09 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 3020182 00:04:17.944 10:46:09 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:17.944 10:46:09 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:17.944 10:46:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3020182 00:04:17.944 10:46:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:17.944 10:46:09 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:17.944 10:46:09 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3020182' 00:04:17.944 killing process with pid 3020182 00:04:17.944 10:46:09 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 3020182 00:04:17.944 10:46:09 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 3020182 00:04:18.206 00:04:18.206 real 0m1.524s 00:04:18.206 user 0m2.780s 00:04:18.206 sys 0m0.450s 00:04:18.206 10:46:09 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:18.206 10:46:09 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:18.206 ************************************ 00:04:18.206 END TEST spdkcli_tcp 00:04:18.206 ************************************ 00:04:18.206 10:46:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:18.206 10:46:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:18.206 10:46:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:18.206 10:46:09 -- common/autotest_common.sh@10 -- # set +x 00:04:18.206 ************************************ 00:04:18.206 START TEST dpdk_mem_utility 00:04:18.206 ************************************ 00:04:18.206 10:46:09 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:18.206 * Looking for test storage... 00:04:18.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:18.206 10:46:09 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:18.206 10:46:09 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:18.206 10:46:09 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:18.466 10:46:09 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:18.466 10:46:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.466 10:46:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.466 10:46:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.466 10:46:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.466 10:46:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.466 10:46:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.466 10:46:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:04:18.466 10:46:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.466 10:46:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.466 10:46:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.466 10:46:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.467 10:46:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:18.467 10:46:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:18.467 10:46:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.467 10:46:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:18.467 10:46:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:18.467 10:46:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:18.467 10:46:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.467 10:46:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:18.467 10:46:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.467 10:46:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:18.467 10:46:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:18.467 10:46:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.467 10:46:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:18.467 10:46:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.467 10:46:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.467 10:46:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.467 10:46:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:18.467 10:46:09 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.467 10:46:09 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 
'LCOV_OPTS= 00:04:18.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.467 --rc genhtml_branch_coverage=1 00:04:18.467 --rc genhtml_function_coverage=1 00:04:18.467 --rc genhtml_legend=1 00:04:18.467 --rc geninfo_all_blocks=1 00:04:18.467 --rc geninfo_unexecuted_blocks=1 00:04:18.467 00:04:18.467 ' 00:04:18.467 10:46:09 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:18.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.467 --rc genhtml_branch_coverage=1 00:04:18.467 --rc genhtml_function_coverage=1 00:04:18.467 --rc genhtml_legend=1 00:04:18.467 --rc geninfo_all_blocks=1 00:04:18.467 --rc geninfo_unexecuted_blocks=1 00:04:18.467 00:04:18.467 ' 00:04:18.467 10:46:09 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:18.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.467 --rc genhtml_branch_coverage=1 00:04:18.467 --rc genhtml_function_coverage=1 00:04:18.467 --rc genhtml_legend=1 00:04:18.467 --rc geninfo_all_blocks=1 00:04:18.467 --rc geninfo_unexecuted_blocks=1 00:04:18.467 00:04:18.467 ' 00:04:18.467 10:46:09 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:18.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.467 --rc genhtml_branch_coverage=1 00:04:18.467 --rc genhtml_function_coverage=1 00:04:18.467 --rc genhtml_legend=1 00:04:18.467 --rc geninfo_all_blocks=1 00:04:18.467 --rc geninfo_unexecuted_blocks=1 00:04:18.467 00:04:18.467 ' 00:04:18.467 10:46:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:18.467 10:46:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3020594 00:04:18.467 10:46:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.467 10:46:09 
dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3020594 00:04:18.467 10:46:09 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 3020594 ']' 00:04:18.467 10:46:09 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.467 10:46:09 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:18.467 10:46:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.467 10:46:09 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:18.467 10:46:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:18.467 [2024-11-06 10:46:09.768552] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:04:18.467 [2024-11-06 10:46:09.768609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3020594 ] 00:04:18.467 [2024-11-06 10:46:09.840878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.467 [2024-11-06 10:46:09.876474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.409 10:46:10 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:19.409 10:46:10 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:19.409 10:46:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:19.409 10:46:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:19.409 10:46:10 dpdk_mem_utility -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:04:19.409 10:46:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:19.409 { 00:04:19.409 "filename": "/tmp/spdk_mem_dump.txt" 00:04:19.409 } 00:04:19.409 10:46:10 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.409 10:46:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:19.409 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:19.409 1 heaps totaling size 810.000000 MiB 00:04:19.409 size: 810.000000 MiB heap id: 0 00:04:19.409 end heaps---------- 00:04:19.409 9 mempools totaling size 595.772034 MiB 00:04:19.409 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:19.409 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:19.409 size: 92.545471 MiB name: bdev_io_3020594 00:04:19.409 size: 50.003479 MiB name: msgpool_3020594 00:04:19.409 size: 36.509338 MiB name: fsdev_io_3020594 00:04:19.409 size: 21.763794 MiB name: PDU_Pool 00:04:19.409 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:19.409 size: 4.133484 MiB name: evtpool_3020594 00:04:19.409 size: 0.026123 MiB name: Session_Pool 00:04:19.409 end mempools------- 00:04:19.409 6 memzones totaling size 4.142822 MiB 00:04:19.410 size: 1.000366 MiB name: RG_ring_0_3020594 00:04:19.410 size: 1.000366 MiB name: RG_ring_1_3020594 00:04:19.410 size: 1.000366 MiB name: RG_ring_4_3020594 00:04:19.410 size: 1.000366 MiB name: RG_ring_5_3020594 00:04:19.410 size: 0.125366 MiB name: RG_ring_2_3020594 00:04:19.410 size: 0.015991 MiB name: RG_ring_3_3020594 00:04:19.410 end memzones------- 00:04:19.410 10:46:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:19.410 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:19.410 list of free elements. 
size: 10.862488 MiB 00:04:19.410 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:19.410 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:19.410 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:19.410 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:19.410 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:19.410 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:19.410 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:19.410 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:19.410 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:19.410 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:19.410 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:19.410 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:19.410 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:19.410 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:19.410 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:19.410 list of standard malloc elements. 
size: 199.218628 MiB 00:04:19.410 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:19.410 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:19.410 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:19.410 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:19.410 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:19.410 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:19.410 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:19.410 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:19.410 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:19.410 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:19.410 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:19.410 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:19.410 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:19.410 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:19.410 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:19.410 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:19.410 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:19.410 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:19.410 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:19.410 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:19.410 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:19.410 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:19.410 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:19.410 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:19.410 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:19.410 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:19.410 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:19.410 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:19.410 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:19.410 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:19.410 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:19.410 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:19.410 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:19.410 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:19.410 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:19.410 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:19.410 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:19.410 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:19.410 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:19.410 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:19.410 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:19.410 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:19.410 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:19.410 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:19.410 list of memzone associated elements. 
size: 599.918884 MiB 00:04:19.410 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:19.410 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:19.410 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:19.410 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:19.410 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:19.410 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3020594_0 00:04:19.410 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:19.410 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3020594_0 00:04:19.410 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:19.410 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3020594_0 00:04:19.410 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:19.410 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:19.410 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:19.410 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:19.410 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:19.410 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3020594_0 00:04:19.410 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:19.410 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3020594 00:04:19.410 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:19.410 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3020594 00:04:19.410 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:19.410 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:19.410 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:19.410 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:19.410 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:19.410 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:19.410 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:19.410 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:19.410 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:19.410 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3020594 00:04:19.410 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:19.410 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3020594 00:04:19.410 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:19.410 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3020594 00:04:19.410 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:19.410 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3020594 00:04:19.410 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:19.410 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3020594 00:04:19.410 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:19.410 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3020594 00:04:19.410 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:19.410 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:19.410 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:19.410 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:19.410 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:19.410 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:19.410 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:19.410 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3020594 00:04:19.410 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:19.410 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3020594 00:04:19.410 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:19.410 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:19.410 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:19.410 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:19.410 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:19.410 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3020594 00:04:19.410 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:19.410 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:19.410 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:19.410 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3020594 00:04:19.410 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:19.410 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3020594 00:04:19.410 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:19.410 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3020594 00:04:19.410 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:19.411 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:19.411 10:46:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:19.411 10:46:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3020594 00:04:19.411 10:46:10 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 3020594 ']' 00:04:19.411 10:46:10 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 3020594 00:04:19.411 10:46:10 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:19.411 10:46:10 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:19.411 10:46:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3020594 00:04:19.411 10:46:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:19.411 10:46:10 
dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:19.411 10:46:10 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3020594' 00:04:19.411 killing process with pid 3020594 00:04:19.411 10:46:10 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 3020594 00:04:19.411 10:46:10 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 3020594 00:04:19.672 00:04:19.672 real 0m1.431s 00:04:19.672 user 0m1.528s 00:04:19.672 sys 0m0.407s 00:04:19.672 10:46:10 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:19.672 10:46:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:19.672 ************************************ 00:04:19.672 END TEST dpdk_mem_utility 00:04:19.672 ************************************ 00:04:19.672 10:46:10 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:19.672 10:46:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:19.672 10:46:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:19.672 10:46:10 -- common/autotest_common.sh@10 -- # set +x 00:04:19.672 ************************************ 00:04:19.672 START TEST event 00:04:19.672 ************************************ 00:04:19.672 10:46:11 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:19.933 * Looking for test storage... 
00:04:19.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:19.933 10:46:11 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:19.933 10:46:11 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:19.933 10:46:11 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:19.933 10:46:11 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:19.933 10:46:11 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.933 10:46:11 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.933 10:46:11 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.933 10:46:11 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.933 10:46:11 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.933 10:46:11 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.933 10:46:11 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.933 10:46:11 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.933 10:46:11 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.933 10:46:11 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.933 10:46:11 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.933 10:46:11 event -- scripts/common.sh@344 -- # case "$op" in 00:04:19.933 10:46:11 event -- scripts/common.sh@345 -- # : 1 00:04:19.933 10:46:11 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.933 10:46:11 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.933 10:46:11 event -- scripts/common.sh@365 -- # decimal 1 00:04:19.933 10:46:11 event -- scripts/common.sh@353 -- # local d=1 00:04:19.933 10:46:11 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.933 10:46:11 event -- scripts/common.sh@355 -- # echo 1 00:04:19.933 10:46:11 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.933 10:46:11 event -- scripts/common.sh@366 -- # decimal 2 00:04:19.933 10:46:11 event -- scripts/common.sh@353 -- # local d=2 00:04:19.933 10:46:11 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.933 10:46:11 event -- scripts/common.sh@355 -- # echo 2 00:04:19.933 10:46:11 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.933 10:46:11 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.933 10:46:11 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.933 10:46:11 event -- scripts/common.sh@368 -- # return 0 00:04:19.933 10:46:11 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.933 10:46:11 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:19.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.933 --rc genhtml_branch_coverage=1 00:04:19.933 --rc genhtml_function_coverage=1 00:04:19.933 --rc genhtml_legend=1 00:04:19.933 --rc geninfo_all_blocks=1 00:04:19.933 --rc geninfo_unexecuted_blocks=1 00:04:19.933 00:04:19.933 ' 00:04:19.933 10:46:11 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:19.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.933 --rc genhtml_branch_coverage=1 00:04:19.933 --rc genhtml_function_coverage=1 00:04:19.933 --rc genhtml_legend=1 00:04:19.933 --rc geninfo_all_blocks=1 00:04:19.933 --rc geninfo_unexecuted_blocks=1 00:04:19.933 00:04:19.933 ' 00:04:19.933 10:46:11 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:19.933 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:19.933 --rc genhtml_branch_coverage=1 00:04:19.933 --rc genhtml_function_coverage=1 00:04:19.933 --rc genhtml_legend=1 00:04:19.933 --rc geninfo_all_blocks=1 00:04:19.933 --rc geninfo_unexecuted_blocks=1 00:04:19.933 00:04:19.933 ' 00:04:19.933 10:46:11 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:19.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.933 --rc genhtml_branch_coverage=1 00:04:19.933 --rc genhtml_function_coverage=1 00:04:19.933 --rc genhtml_legend=1 00:04:19.933 --rc geninfo_all_blocks=1 00:04:19.933 --rc geninfo_unexecuted_blocks=1 00:04:19.933 00:04:19.933 ' 00:04:19.933 10:46:11 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:19.933 10:46:11 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:19.933 10:46:11 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:19.933 10:46:11 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:19.933 10:46:11 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:19.933 10:46:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:19.933 ************************************ 00:04:19.933 START TEST event_perf 00:04:19.933 ************************************ 00:04:19.933 10:46:11 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:19.933 Running I/O for 1 seconds...[2024-11-06 10:46:11.288525] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:04:19.933 [2024-11-06 10:46:11.288639] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3020997 ] 00:04:20.194 [2024-11-06 10:46:11.369227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:20.194 [2024-11-06 10:46:11.415088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.194 [2024-11-06 10:46:11.415209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:20.194 [2024-11-06 10:46:11.415368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.194 Running I/O for 1 seconds...[2024-11-06 10:46:11.415368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:21.136 00:04:21.136 lcore 0: 177462 00:04:21.136 lcore 1: 177463 00:04:21.136 lcore 2: 177463 00:04:21.136 lcore 3: 177466 00:04:21.136 done. 
00:04:21.136 00:04:21.136 real 0m1.183s 00:04:21.136 user 0m4.092s 00:04:21.136 sys 0m0.088s 00:04:21.136 10:46:12 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:21.136 10:46:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:21.136 ************************************ 00:04:21.136 END TEST event_perf 00:04:21.136 ************************************ 00:04:21.136 10:46:12 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:21.136 10:46:12 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:21.136 10:46:12 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:21.136 10:46:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:21.136 ************************************ 00:04:21.136 START TEST event_reactor 00:04:21.136 ************************************ 00:04:21.136 10:46:12 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:21.136 [2024-11-06 10:46:12.550301] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:04:21.136 [2024-11-06 10:46:12.550396] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3021355 ] 00:04:21.397 [2024-11-06 10:46:12.625652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.397 [2024-11-06 10:46:12.660452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.341 test_start 00:04:22.341 oneshot 00:04:22.341 tick 100 00:04:22.341 tick 100 00:04:22.341 tick 250 00:04:22.341 tick 100 00:04:22.341 tick 100 00:04:22.341 tick 250 00:04:22.341 tick 100 00:04:22.341 tick 500 00:04:22.341 tick 100 00:04:22.341 tick 100 00:04:22.341 tick 250 00:04:22.341 tick 100 00:04:22.341 tick 100 00:04:22.341 test_end 00:04:22.341 00:04:22.341 real 0m1.164s 00:04:22.341 user 0m1.096s 00:04:22.341 sys 0m0.065s 00:04:22.341 10:46:13 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:22.341 10:46:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:22.341 ************************************ 00:04:22.341 END TEST event_reactor 00:04:22.341 ************************************ 00:04:22.341 10:46:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:22.341 10:46:13 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:22.341 10:46:13 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:22.341 10:46:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.696 ************************************ 00:04:22.696 START TEST event_reactor_perf 00:04:22.696 ************************************ 00:04:22.696 10:46:13 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:22.696 [2024-11-06 10:46:13.780585] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:04:22.696 [2024-11-06 10:46:13.780633] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3021501 ] 00:04:22.696 [2024-11-06 10:46:13.849672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.696 [2024-11-06 10:46:13.884248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.638 test_start 00:04:23.638 test_end 00:04:23.638 Performance: 370334 events per second 00:04:23.638 00:04:23.638 real 0m1.144s 00:04:23.638 user 0m1.084s 00:04:23.638 sys 0m0.057s 00:04:23.638 10:46:14 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:23.638 10:46:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:23.638 ************************************ 00:04:23.638 END TEST event_reactor_perf 00:04:23.638 ************************************ 00:04:23.638 10:46:14 event -- event/event.sh@49 -- # uname -s 00:04:23.638 10:46:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:23.638 10:46:14 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:23.638 10:46:14 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:23.638 10:46:14 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:23.638 10:46:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:23.638 ************************************ 00:04:23.638 START TEST event_scheduler 00:04:23.638 ************************************ 00:04:23.638 10:46:14 event.event_scheduler -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:23.901 * Looking for test storage... 00:04:23.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:23.901 10:46:15 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:23.901 10:46:15 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:23.901 10:46:15 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:23.901 10:46:15 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.901 10:46:15 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:23.901 10:46:15 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.901 10:46:15 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:23.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.901 --rc genhtml_branch_coverage=1 00:04:23.901 --rc genhtml_function_coverage=1 00:04:23.901 --rc genhtml_legend=1 00:04:23.901 --rc geninfo_all_blocks=1 00:04:23.901 --rc geninfo_unexecuted_blocks=1 00:04:23.901 00:04:23.901 ' 00:04:23.901 10:46:15 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:23.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.901 --rc genhtml_branch_coverage=1 00:04:23.901 --rc genhtml_function_coverage=1 00:04:23.901 --rc 
genhtml_legend=1 00:04:23.901 --rc geninfo_all_blocks=1 00:04:23.901 --rc geninfo_unexecuted_blocks=1 00:04:23.901 00:04:23.901 ' 00:04:23.901 10:46:15 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:23.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.901 --rc genhtml_branch_coverage=1 00:04:23.901 --rc genhtml_function_coverage=1 00:04:23.901 --rc genhtml_legend=1 00:04:23.901 --rc geninfo_all_blocks=1 00:04:23.901 --rc geninfo_unexecuted_blocks=1 00:04:23.901 00:04:23.901 ' 00:04:23.901 10:46:15 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:23.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.901 --rc genhtml_branch_coverage=1 00:04:23.901 --rc genhtml_function_coverage=1 00:04:23.901 --rc genhtml_legend=1 00:04:23.901 --rc geninfo_all_blocks=1 00:04:23.901 --rc geninfo_unexecuted_blocks=1 00:04:23.901 00:04:23.901 ' 00:04:23.901 10:46:15 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:23.901 10:46:15 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3021781 00:04:23.901 10:46:15 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.901 10:46:15 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3021781 00:04:23.901 10:46:15 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:23.901 10:46:15 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 3021781 ']' 00:04:23.901 10:46:15 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.901 10:46:15 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:23.901 10:46:15 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.901 10:46:15 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:23.901 10:46:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:23.901 [2024-11-06 10:46:15.242887] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:04:23.901 [2024-11-06 10:46:15.242951] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3021781 ] 00:04:23.901 [2024-11-06 10:46:15.307386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:24.163 [2024-11-06 10:46:15.346499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.163 [2024-11-06 10:46:15.346656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.163 [2024-11-06 10:46:15.346812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:24.163 [2024-11-06 10:46:15.346816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:24.163 10:46:15 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:24.163 10:46:15 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:24.163 10:46:15 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:24.163 10:46:15 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.163 10:46:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:24.163 [2024-11-06 10:46:15.375411] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:24.163 [2024-11-06 10:46:15.375425] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:24.163 [2024-11-06 10:46:15.375434] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:24.163 [2024-11-06 10:46:15.375439] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:24.163 [2024-11-06 10:46:15.375443] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:24.163 10:46:15 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.163 10:46:15 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:24.163 10:46:15 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.163 10:46:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:24.163 [2024-11-06 10:46:15.432975] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:24.163 10:46:15 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.163 10:46:15 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:24.163 10:46:15 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:24.163 10:46:15 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.163 10:46:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:24.163 ************************************ 00:04:24.163 START TEST scheduler_create_thread 00:04:24.163 ************************************ 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.163 2 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.163 3 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.163 4 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.163 5 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.163 10:46:15 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.163 6 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.163 7 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.163 8 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.163 10:46:15 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.163 9 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.163 10:46:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.735 10 00:04:24.735 10:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.735 10:46:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:24.735 10:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.735 10:46:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.121 10:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.121 10:46:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:26.121 10:46:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:26.121 10:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.121 10:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.063 10:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.063 10:46:18 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:27.063 10:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.063 10:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.632 10:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.632 10:46:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:27.632 10:46:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:27.632 10:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.632 10:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.572 10:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.572 00:04:28.572 real 0m4.224s 00:04:28.572 user 0m0.024s 00:04:28.572 sys 0m0.008s 00:04:28.572 10:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.572 10:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.572 ************************************ 00:04:28.572 END TEST scheduler_create_thread 00:04:28.572 ************************************ 00:04:28.572 10:46:19 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:28.572 10:46:19 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3021781 00:04:28.572 10:46:19 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 3021781 ']' 00:04:28.572 10:46:19 event.event_scheduler -- common/autotest_common.sh@956 -- # 
kill -0 3021781 00:04:28.572 10:46:19 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:28.572 10:46:19 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:28.572 10:46:19 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3021781 00:04:28.572 10:46:19 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:28.572 10:46:19 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:28.572 10:46:19 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3021781' 00:04:28.572 killing process with pid 3021781 00:04:28.572 10:46:19 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 3021781 00:04:28.573 10:46:19 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 3021781 00:04:28.833 [2024-11-06 10:46:20.074409] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:28.833 00:04:28.834 real 0m5.235s 00:04:28.834 user 0m11.108s 00:04:28.834 sys 0m0.359s 00:04:28.834 10:46:20 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.834 10:46:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:28.834 ************************************ 00:04:28.834 END TEST event_scheduler 00:04:28.834 ************************************ 00:04:29.094 10:46:20 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:29.094 10:46:20 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:29.094 10:46:20 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:29.094 10:46:20 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.094 10:46:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.094 ************************************ 00:04:29.094 START TEST app_repeat 00:04:29.094 ************************************ 00:04:29.094 10:46:20 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:29.094 10:46:20 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.094 10:46:20 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.094 10:46:20 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:29.094 10:46:20 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.094 10:46:20 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:29.094 10:46:20 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:29.094 10:46:20 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:29.094 10:46:20 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3022869 00:04:29.094 10:46:20 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.094 10:46:20 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:29.094 10:46:20 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3022869' 00:04:29.094 Process app_repeat pid: 3022869 00:04:29.094 10:46:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:29.094 10:46:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:29.094 spdk_app_start Round 0 00:04:29.094 10:46:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3022869 /var/tmp/spdk-nbd.sock 00:04:29.094 10:46:20 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3022869 ']' 00:04:29.094 10:46:20 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:29.094 10:46:20 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:29.094 10:46:20 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:29.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:29.094 10:46:20 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:29.094 10:46:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:29.094 [2024-11-06 10:46:20.358520] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:04:29.094 [2024-11-06 10:46:20.358619] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3022869 ] 00:04:29.094 [2024-11-06 10:46:20.431728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:29.094 [2024-11-06 10:46:20.472761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.094 [2024-11-06 10:46:20.472778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.355 10:46:20 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:29.355 10:46:20 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:29.355 10:46:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.355 Malloc0 00:04:29.355 10:46:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.615 Malloc1 00:04:29.615 10:46:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.615 10:46:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.615 10:46:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.615 10:46:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:29.615 10:46:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.615 10:46:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:29.615 10:46:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.615 
10:46:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.615 10:46:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.615 10:46:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:29.615 10:46:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.615 10:46:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:29.615 10:46:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:29.615 10:46:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:29.615 10:46:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.615 10:46:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:29.877 /dev/nbd0 00:04:29.877 10:46:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:29.877 10:46:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:29.877 10:46:21 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:29.877 10:46:21 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:29.877 10:46:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:29.877 10:46:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:29.877 10:46:21 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:29.877 10:46:21 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:29.877 10:46:21 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:29.877 10:46:21 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:29.877 10:46:21 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:29.877 1+0 records in 00:04:29.877 1+0 records out 00:04:29.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271933 s, 15.1 MB/s 00:04:29.877 10:46:21 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.877 10:46:21 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:29.877 10:46:21 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.877 10:46:21 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:29.877 10:46:21 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:29.877 10:46:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.877 10:46:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.877 10:46:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:30.137 /dev/nbd1 00:04:30.137 10:46:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:30.137 10:46:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:30.137 10:46:21 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:30.137 10:46:21 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:30.137 10:46:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:30.137 10:46:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:30.137 10:46:21 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:30.137 10:46:21 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:30.137 10:46:21 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:30.137 10:46:21 event.app_repeat -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:30.137 10:46:21 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.137 1+0 records in 00:04:30.137 1+0 records out 00:04:30.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272362 s, 15.0 MB/s 00:04:30.137 10:46:21 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.137 10:46:21 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:30.137 10:46:21 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.137 10:46:21 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:30.137 10:46:21 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:30.137 10:46:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.137 10:46:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.137 10:46:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.137 10:46:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.137 10:46:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.137 10:46:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:30.137 { 00:04:30.137 "nbd_device": "/dev/nbd0", 00:04:30.137 "bdev_name": "Malloc0" 00:04:30.137 }, 00:04:30.137 { 00:04:30.137 "nbd_device": "/dev/nbd1", 00:04:30.137 "bdev_name": "Malloc1" 00:04:30.137 } 00:04:30.137 ]' 00:04:30.137 10:46:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:30.137 { 00:04:30.137 "nbd_device": "/dev/nbd0", 00:04:30.137 "bdev_name": "Malloc0" 00:04:30.137 
}, 00:04:30.137 { 00:04:30.137 "nbd_device": "/dev/nbd1", 00:04:30.137 "bdev_name": "Malloc1" 00:04:30.137 } 00:04:30.137 ]' 00:04:30.137 10:46:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.398 10:46:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:30.398 /dev/nbd1' 00:04:30.398 10:46:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:30.398 /dev/nbd1' 00:04:30.398 10:46:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.398 10:46:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:30.398 10:46:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:30.399 256+0 records in 00:04:30.399 256+0 records out 00:04:30.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124795 s, 84.0 MB/s 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:30.399 256+0 records in 00:04:30.399 256+0 records out 00:04:30.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018304 s, 57.3 MB/s 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:30.399 256+0 records in 00:04:30.399 256+0 records out 00:04:30.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174761 s, 60.0 MB/s 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:30.399 10:46:21 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.399 10:46:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:30.660 10:46:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:30.660 10:46:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:30.660 10:46:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:30.660 10:46:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.660 10:46:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.660 10:46:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:30.660 10:46:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:30.660 10:46:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.660 10:46:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.660 10:46:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:30.660 10:46:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:30.660 10:46:22 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:30.660 10:46:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:30.660 10:46:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.660 10:46:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.660 10:46:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:30.660 10:46:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:30.660 10:46:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.660 10:46:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.660 10:46:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.660 10:46:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.920 10:46:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:30.920 10:46:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:30.920 10:46:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.920 10:46:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:30.920 10:46:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:30.920 10:46:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.920 10:46:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:30.920 10:46:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:30.920 10:46:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:30.920 10:46:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:30.920 10:46:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:30.920 10:46:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:30.920 10:46:22 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:31.181 10:46:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:31.181 [2024-11-06 10:46:22.555483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.181 [2024-11-06 10:46:22.590323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.181 [2024-11-06 10:46:22.590326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.441 [2024-11-06 10:46:22.621968] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:31.441 [2024-11-06 10:46:22.622005] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:34.740 10:46:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:34.740 10:46:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:34.740 spdk_app_start Round 1 00:04:34.740 10:46:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3022869 /var/tmp/spdk-nbd.sock 00:04:34.740 10:46:25 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3022869 ']' 00:04:34.740 10:46:25 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:34.740 10:46:25 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:34.740 10:46:25 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:34.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:34.740 10:46:25 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:34.740 10:46:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:34.740 10:46:25 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:34.740 10:46:25 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:34.740 10:46:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.740 Malloc0 00:04:34.740 10:46:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.740 Malloc1 00:04:34.740 10:46:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.740 10:46:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.740 10:46:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.740 10:46:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:34.740 10:46:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.740 10:46:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:34.740 10:46:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.740 10:46:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.740 10:46:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.740 10:46:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:34.740 10:46:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.740 10:46:25 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:34.740 10:46:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:34.740 10:46:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:34.740 10:46:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.740 10:46:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:34.740 /dev/nbd0 00:04:34.740 10:46:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:34.740 10:46:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:34.740 10:46:26 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:34.740 10:46:26 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:34.740 10:46:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:34.740 10:46:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:34.740 10:46:26 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:34.740 10:46:26 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:34.740 10:46:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:34.740 10:46:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:34.740 10:46:26 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.740 1+0 records in 00:04:34.740 1+0 records out 00:04:34.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311305 s, 13.2 MB/s 00:04:34.740 10:46:26 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.740 10:46:26 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:34.740 10:46:26 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.740 10:46:26 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:34.740 10:46:26 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:34.740 10:46:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.740 10:46:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.740 10:46:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:35.001 /dev/nbd1 00:04:35.001 10:46:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:35.001 10:46:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:35.001 10:46:26 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:35.001 10:46:26 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:35.001 10:46:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:35.001 10:46:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:35.001 10:46:26 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:35.001 10:46:26 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:35.001 10:46:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:35.001 10:46:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:35.001 10:46:26 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:35.001 1+0 records in 00:04:35.001 1+0 records out 00:04:35.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239354 s, 17.1 MB/s 00:04:35.001 10:46:26 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.001 10:46:26 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:35.001 10:46:26 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.001 10:46:26 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:35.001 10:46:26 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:35.001 10:46:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:35.001 10:46:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.001 10:46:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.001 10:46:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.001 10:46:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:35.261 { 00:04:35.261 "nbd_device": "/dev/nbd0", 00:04:35.261 "bdev_name": "Malloc0" 00:04:35.261 }, 00:04:35.261 { 00:04:35.261 "nbd_device": "/dev/nbd1", 00:04:35.261 "bdev_name": "Malloc1" 00:04:35.261 } 00:04:35.261 ]' 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:35.261 { 00:04:35.261 "nbd_device": "/dev/nbd0", 00:04:35.261 "bdev_name": "Malloc0" 00:04:35.261 }, 00:04:35.261 { 00:04:35.261 "nbd_device": "/dev/nbd1", 00:04:35.261 "bdev_name": "Malloc1" 00:04:35.261 } 00:04:35.261 ]' 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:35.261 /dev/nbd1' 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:35.261 /dev/nbd1' 00:04:35.261 
10:46:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:35.261 256+0 records in 00:04:35.261 256+0 records out 00:04:35.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127306 s, 82.4 MB/s 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:35.261 256+0 records in 00:04:35.261 256+0 records out 00:04:35.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164119 s, 63.9 MB/s 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:35.261 256+0 records in 00:04:35.261 256+0 records out 00:04:35.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194506 s, 53.9 MB/s 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:35.261 10:46:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.262 10:46:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:35.262 10:46:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.262 10:46:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:35.262 10:46:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.262 10:46:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:35.262 10:46:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:35.262 10:46:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:35.262 10:46:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.262 10:46:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:35.522 10:46:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:35.522 10:46:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:35.522 10:46:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:35.522 10:46:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.522 10:46:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.522 10:46:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:35.522 10:46:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.522 10:46:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.522 10:46:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.522 10:46:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:35.783 10:46:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:35.783 10:46:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:35.783 10:46:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:35.783 10:46:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.783 10:46:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.783 10:46:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:35.783 10:46:27 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:35.783 10:46:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.783 10:46:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.783 10:46:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.783 10:46:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.044 10:46:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:36.044 10:46:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:36.044 10:46:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.044 10:46:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:36.044 10:46:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:36.044 10:46:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.044 10:46:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:36.044 10:46:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:36.044 10:46:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:36.044 10:46:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:36.044 10:46:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:36.044 10:46:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:36.044 10:46:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:36.304 10:46:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:36.304 [2024-11-06 10:46:27.589716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.305 [2024-11-06 10:46:27.624576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.305 [2024-11-06 10:46:27.624579] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.305 [2024-11-06 10:46:27.657002] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:36.305 [2024-11-06 10:46:27.657036] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:39.753 10:46:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:39.753 10:46:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:39.753 spdk_app_start Round 2 00:04:39.753 10:46:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3022869 /var/tmp/spdk-nbd.sock 00:04:39.753 10:46:30 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3022869 ']' 00:04:39.753 10:46:30 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:39.753 10:46:30 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:39.753 10:46:30 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:39.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:39.753 10:46:30 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:39.753 10:46:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.753 10:46:30 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:39.753 10:46:30 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:39.753 10:46:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.753 Malloc0 00:04:39.753 10:46:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.753 Malloc1 00:04:39.753 10:46:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.753 10:46:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.753 10:46:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.753 10:46:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:39.753 10:46:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.753 10:46:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:39.753 10:46:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.753 10:46:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.753 10:46:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.753 10:46:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:39.753 10:46:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.753 10:46:31 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:39.753 10:46:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:39.753 10:46:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:39.753 10:46:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.753 10:46:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:40.014 /dev/nbd0 00:04:40.014 10:46:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:40.014 10:46:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.014 1+0 records in 00:04:40.014 1+0 records out 00:04:40.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271588 s, 15.1 MB/s 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:40.014 10:46:31 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:40.014 10:46:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.014 10:46:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.014 10:46:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:40.014 /dev/nbd1 00:04:40.014 10:46:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:40.014 10:46:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:40.014 10:46:31 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.275 1+0 records in 00:04:40.275 1+0 records out 00:04:40.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246944 s, 16.6 MB/s 00:04:40.275 10:46:31 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.275 10:46:31 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:40.275 10:46:31 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.275 10:46:31 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:40.275 10:46:31 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:40.275 10:46:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.275 10:46:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.275 10:46:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.275 10:46:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.275 10:46:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.275 10:46:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:40.275 { 00:04:40.275 "nbd_device": "/dev/nbd0", 00:04:40.275 "bdev_name": "Malloc0" 00:04:40.275 }, 00:04:40.275 { 00:04:40.275 "nbd_device": "/dev/nbd1", 00:04:40.275 "bdev_name": "Malloc1" 00:04:40.275 } 00:04:40.275 ]' 00:04:40.275 10:46:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:40.275 { 00:04:40.275 "nbd_device": "/dev/nbd0", 00:04:40.275 "bdev_name": "Malloc0" 00:04:40.275 }, 00:04:40.275 { 00:04:40.275 "nbd_device": "/dev/nbd1", 00:04:40.275 "bdev_name": "Malloc1" 00:04:40.275 } 00:04:40.275 ]' 00:04:40.275 10:46:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.275 10:46:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:40.275 /dev/nbd1' 00:04:40.275 10:46:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.275 10:46:31 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:40.275 /dev/nbd1' 00:04:40.275 10:46:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:40.275 10:46:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:40.275 10:46:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:40.275 10:46:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:40.276 10:46:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:40.276 10:46:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.276 10:46:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.276 10:46:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:40.276 10:46:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.276 10:46:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:40.276 10:46:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:40.276 256+0 records in 00:04:40.276 256+0 records out 00:04:40.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118987 s, 88.1 MB/s 00:04:40.276 10:46:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.276 10:46:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:40.537 256+0 records in 00:04:40.537 256+0 records out 00:04:40.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167005 s, 62.8 MB/s 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:40.537 256+0 records in 00:04:40.537 256+0 records out 00:04:40.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173335 s, 60.5 MB/s 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.537 10:46:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:40.798 10:46:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:40.798 10:46:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:40.798 10:46:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:40.798 10:46:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.798 10:46:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.798 10:46:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:40.798 10:46:32 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:40.798 10:46:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.798 10:46:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.798 10:46:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.798 10:46:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.060 10:46:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:41.060 10:46:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:41.060 10:46:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.060 10:46:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:41.060 10:46:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:41.060 10:46:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.060 10:46:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:41.060 10:46:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:41.060 10:46:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:41.060 10:46:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:41.060 10:46:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:41.060 10:46:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:41.060 10:46:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:41.320 10:46:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:41.321 [2024-11-06 10:46:32.614705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.321 [2024-11-06 10:46:32.649664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.321 [2024-11-06 10:46:32.649667] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.321 [2024-11-06 10:46:32.681439] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:41.321 [2024-11-06 10:46:32.681479] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:44.621 10:46:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3022869 /var/tmp/spdk-nbd.sock 00:04:44.621 10:46:35 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3022869 ']' 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:44.622 10:46:35 event.app_repeat -- event/event.sh@39 -- # killprocess 3022869 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 3022869 ']' 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 3022869 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3022869 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3022869' 00:04:44.622 killing process with pid 3022869 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@971 -- # kill 3022869 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@976 -- # wait 3022869 00:04:44.622 spdk_app_start is called in Round 0. 00:04:44.622 Shutdown signal received, stop current app iteration 00:04:44.622 Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 reinitialization... 00:04:44.622 spdk_app_start is called in Round 1. 00:04:44.622 Shutdown signal received, stop current app iteration 00:04:44.622 Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 reinitialization... 00:04:44.622 spdk_app_start is called in Round 2. 
00:04:44.622 Shutdown signal received, stop current app iteration 00:04:44.622 Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 reinitialization... 00:04:44.622 spdk_app_start is called in Round 3. 00:04:44.622 Shutdown signal received, stop current app iteration 00:04:44.622 10:46:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:44.622 10:46:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:44.622 00:04:44.622 real 0m15.522s 00:04:44.622 user 0m33.895s 00:04:44.622 sys 0m2.183s 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:44.622 10:46:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:44.622 ************************************ 00:04:44.622 END TEST app_repeat 00:04:44.622 ************************************ 00:04:44.622 10:46:35 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:44.622 10:46:35 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:44.622 10:46:35 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:44.622 10:46:35 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:44.622 10:46:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.622 ************************************ 00:04:44.622 START TEST cpu_locks 00:04:44.622 ************************************ 00:04:44.622 10:46:35 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:44.622 * Looking for test storage... 
00:04:44.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:44.622 10:46:36 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:44.622 10:46:36 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:44.622 10:46:36 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:44.883 10:46:36 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.883 10:46:36 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:44.883 10:46:36 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.883 10:46:36 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:44.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.883 --rc genhtml_branch_coverage=1 00:04:44.883 --rc genhtml_function_coverage=1 00:04:44.883 --rc genhtml_legend=1 00:04:44.883 --rc geninfo_all_blocks=1 00:04:44.883 --rc geninfo_unexecuted_blocks=1 00:04:44.883 00:04:44.883 ' 00:04:44.883 10:46:36 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:44.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.883 --rc genhtml_branch_coverage=1 00:04:44.883 --rc genhtml_function_coverage=1 00:04:44.883 --rc genhtml_legend=1 00:04:44.883 --rc geninfo_all_blocks=1 00:04:44.883 --rc geninfo_unexecuted_blocks=1 
00:04:44.883 00:04:44.883 ' 00:04:44.884 10:46:36 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:44.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.884 --rc genhtml_branch_coverage=1 00:04:44.884 --rc genhtml_function_coverage=1 00:04:44.884 --rc genhtml_legend=1 00:04:44.884 --rc geninfo_all_blocks=1 00:04:44.884 --rc geninfo_unexecuted_blocks=1 00:04:44.884 00:04:44.884 ' 00:04:44.884 10:46:36 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:44.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.884 --rc genhtml_branch_coverage=1 00:04:44.884 --rc genhtml_function_coverage=1 00:04:44.884 --rc genhtml_legend=1 00:04:44.884 --rc geninfo_all_blocks=1 00:04:44.884 --rc geninfo_unexecuted_blocks=1 00:04:44.884 00:04:44.884 ' 00:04:44.884 10:46:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:44.884 10:46:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:44.884 10:46:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:44.884 10:46:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:44.884 10:46:36 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:44.884 10:46:36 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:44.884 10:46:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.884 ************************************ 00:04:44.884 START TEST default_locks 00:04:44.884 ************************************ 00:04:44.884 10:46:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:04:44.884 10:46:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3026436 00:04:44.884 10:46:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3026436 00:04:44.884 10:46:36 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.884 10:46:36 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3026436 ']' 00:04:44.884 10:46:36 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.884 10:46:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:44.884 10:46:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.884 10:46:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:44.884 10:46:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.884 [2024-11-06 10:46:36.222152] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:04:44.884 [2024-11-06 10:46:36.222220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026436 ] 00:04:44.884 [2024-11-06 10:46:36.297043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.145 [2024-11-06 10:46:36.339352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.717 10:46:36 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:45.717 10:46:36 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:04:45.717 10:46:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3026436 00:04:45.717 10:46:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3026436 00:04:45.717 10:46:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:45.978 lslocks: write error 00:04:45.978 10:46:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3026436 00:04:45.978 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 3026436 ']' 00:04:45.978 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 3026436 00:04:45.978 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:04:45.978 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:45.978 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3026436 00:04:45.978 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:45.978 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:45.978 10:46:37 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 3026436' 00:04:45.978 killing process with pid 3026436 00:04:45.978 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 3026436 00:04:45.978 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 3026436 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3026436 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3026436 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3026436 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3026436 ']' 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
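The `lt 1.15 2` / `cmp_versions` trace earlier in this log splits each version on `.`, `-`, and `:` (`IFS=.-:`) and compares component by component, padding the shorter list. A minimal Python sketch of that comparison (names assumed; not SPDK's actual `scripts/common.sh` code):

```python
import re

def lt(ver1: str, ver2: str) -> bool:
    """Component-wise version compare, mirroring cmp_versions' IFS=.-: split."""
    a = [int(x) for x in re.split(r"[.:-]", ver1)]
    b = [int(x) for x in re.split(r"[.:-]", ver2)]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))  # pad shorter version with zeros
    b += [0] * (n - len(b))
    return a < b

print(lt("1.15", "2"))  # the lcov check in the trace: True, so 1.15 < 2
```

This is why the harness treats lcov 1.15 as older than 2 and selects the `--rc lcov_branch_coverage=1 ...` option set seen in the trace.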
00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3026436) - No such process 00:04:46.240 ERROR: process (pid: 3026436) is no longer running 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:46.240 00:04:46.240 real 0m1.273s 00:04:46.240 user 0m1.378s 00:04:46.240 sys 0m0.411s 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:46.240 10:46:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.240 ************************************ 00:04:46.240 END TEST default_locks 00:04:46.240 ************************************ 00:04:46.240 10:46:37 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:46.240 10:46:37 event.cpu_locks -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:46.240 10:46:37 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.240 10:46:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.240 ************************************ 00:04:46.240 START TEST default_locks_via_rpc 00:04:46.240 ************************************ 00:04:46.240 10:46:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:04:46.240 10:46:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3026686 00:04:46.240 10:46:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3026686 00:04:46.240 10:46:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.240 10:46:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3026686 ']' 00:04:46.240 10:46:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.240 10:46:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:46.240 10:46:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.240 10:46:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:46.240 10:46:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.240 [2024-11-06 10:46:37.572664] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:04:46.240 [2024-11-06 10:46:37.572721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026686 ] 00:04:46.240 [2024-11-06 10:46:37.646589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.502 [2024-11-06 10:46:37.687902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.073 10:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:47.073 10:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:47.073 10:46:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:47.073 10:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.073 10:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.073 10:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.073 10:46:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:47.073 10:46:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:47.073 10:46:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:47.073 10:46:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:47.073 10:46:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:47.073 10:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.073 10:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.073 10:46:38 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.073 10:46:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3026686 00:04:47.073 10:46:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3026686 00:04:47.073 10:46:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:47.644 10:46:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3026686 00:04:47.644 10:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 3026686 ']' 00:04:47.644 10:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 3026686 00:04:47.644 10:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:04:47.644 10:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:47.644 10:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3026686 00:04:47.644 10:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:47.644 10:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:47.644 10:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3026686' 00:04:47.644 killing process with pid 3026686 00:04:47.644 10:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 3026686 00:04:47.644 10:46:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 3026686 00:04:47.904 00:04:47.904 real 0m1.574s 00:04:47.904 user 0m1.703s 00:04:47.904 sys 0m0.519s 00:04:47.904 10:46:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:47.904 10:46:39 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.904 ************************************ 00:04:47.904 END TEST default_locks_via_rpc 00:04:47.904 ************************************ 00:04:47.904 10:46:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:47.904 10:46:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:47.904 10:46:39 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.904 10:46:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.904 ************************************ 00:04:47.904 START TEST non_locking_app_on_locked_coremask 00:04:47.904 ************************************ 00:04:47.904 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:04:47.904 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3027028 00:04:47.904 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3027028 /var/tmp/spdk.sock 00:04:47.904 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.904 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3027028 ']' 00:04:47.904 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.904 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:47.904 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:47.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.904 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:47.904 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.905 [2024-11-06 10:46:39.211284] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:04:47.905 [2024-11-06 10:46:39.211339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027028 ] 00:04:47.905 [2024-11-06 10:46:39.285128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.166 [2024-11-06 10:46:39.326164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.735 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:48.735 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:48.735 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3027183 00:04:48.735 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3027183 /var/tmp/spdk2.sock 00:04:48.735 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:48.735 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3027183 ']' 00:04:48.735 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:48.735 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:48.735 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:48.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:48.736 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:48.736 10:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.736 [2024-11-06 10:46:40.064799] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:04:48.736 [2024-11-06 10:46:40.064866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027183 ] 00:04:48.996 [2024-11-06 10:46:40.176467] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:48.996 [2024-11-06 10:46:40.176499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.996 [2024-11-06 10:46:40.248677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.566 10:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:49.566 10:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:49.566 10:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3027028 00:04:49.566 10:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3027028 00:04:49.566 10:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:50.208 lslocks: write error 00:04:50.208 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3027028 00:04:50.208 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3027028 ']' 00:04:50.208 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3027028 00:04:50.208 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:50.208 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:50.208 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3027028 00:04:50.208 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:50.208 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:50.208 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 3027028' 00:04:50.208 killing process with pid 3027028 00:04:50.209 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3027028 00:04:50.209 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3027028 00:04:50.781 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3027183 00:04:50.781 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3027183 ']' 00:04:50.781 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3027183 00:04:50.781 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:50.781 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:50.781 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3027183 00:04:50.781 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:50.781 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:50.781 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3027183' 00:04:50.781 killing process with pid 3027183 00:04:50.781 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3027183 00:04:50.781 10:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3027183 00:04:50.781 00:04:50.781 real 0m3.034s 00:04:50.781 user 0m3.374s 00:04:50.781 sys 0m0.920s 00:04:50.781 10:46:42 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:50.781 10:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.781 ************************************ 00:04:50.781 END TEST non_locking_app_on_locked_coremask 00:04:50.781 ************************************ 00:04:51.042 10:46:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:51.042 10:46:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:51.042 10:46:42 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.042 10:46:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.042 ************************************ 00:04:51.042 START TEST locking_app_on_unlocked_coremask 00:04:51.042 ************************************ 00:04:51.042 10:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:04:51.042 10:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3027618 00:04:51.042 10:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3027618 /var/tmp/spdk.sock 00:04:51.042 10:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:51.042 10:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3027618 ']' 00:04:51.042 10:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.042 10:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:51.042 10:46:42 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.042 10:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:51.042 10:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.042 [2024-11-06 10:46:42.331262] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:04:51.042 [2024-11-06 10:46:42.331315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027618 ] 00:04:51.042 [2024-11-06 10:46:42.402422] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:51.042 [2024-11-06 10:46:42.402451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.042 [2024-11-06 10:46:42.438567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.983 10:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:51.983 10:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:51.983 10:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3027895 00:04:51.983 10:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3027895 /var/tmp/spdk2.sock 00:04:51.983 10:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:51.983 10:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3027895 ']' 00:04:51.983 10:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.983 10:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:51.983 10:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:51.983 10:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:51.983 10:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.983 [2024-11-06 10:46:43.174578] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:04:51.983 [2024-11-06 10:46:43.174632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027895 ] 00:04:51.983 [2024-11-06 10:46:43.288103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.983 [2024-11-06 10:46:43.360349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.555 10:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:52.555 10:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:52.555 10:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3027895 00:04:52.555 10:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3027895 00:04:52.555 10:46:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:53.128 lslocks: write error 00:04:53.128 10:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3027618 00:04:53.128 10:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3027618 ']' 00:04:53.128 10:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3027618 00:04:53.128 10:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:53.128 10:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:53.128 10:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3027618 00:04:53.388 10:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:53.388 10:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:53.388 10:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3027618' 00:04:53.388 killing process with pid 3027618 00:04:53.388 10:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3027618 00:04:53.388 10:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3027618 00:04:53.649 10:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3027895 00:04:53.649 10:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3027895 ']' 00:04:53.649 10:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3027895 00:04:53.649 10:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:53.649 10:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:53.649 10:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3027895 00:04:53.649 10:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:53.649 10:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:53.649 10:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3027895' 00:04:53.649 killing process with pid 3027895 00:04:53.649 10:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3027895 00:04:53.649 10:46:45 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3027895 00:04:53.909 00:04:53.909 real 0m2.966s 00:04:53.909 user 0m3.294s 00:04:53.909 sys 0m0.894s 00:04:53.909 10:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:53.909 10:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.909 ************************************ 00:04:53.909 END TEST locking_app_on_unlocked_coremask 00:04:53.909 ************************************ 00:04:53.909 10:46:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:53.909 10:46:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:53.909 10:46:45 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:53.909 10:46:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.909 ************************************ 00:04:53.909 START TEST locking_app_on_locked_coremask 00:04:53.909 ************************************ 00:04:53.909 10:46:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:04:53.909 10:46:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3028269 00:04:53.909 10:46:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3028269 /var/tmp/spdk.sock 00:04:53.909 10:46:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.909 10:46:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3028269 ']' 00:04:53.909 10:46:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:53.909 10:46:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:53.909 10:46:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.909 10:46:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:53.909 10:46:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.171 [2024-11-06 10:46:45.376383] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:04:54.171 [2024-11-06 10:46:45.376435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3028269 ] 00:04:54.171 [2024-11-06 10:46:45.450580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.171 [2024-11-06 10:46:45.486229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.743 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:54.743 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:54.743 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3028603 00:04:55.003 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3028603 /var/tmp/spdk2.sock 00:04:55.003 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:55.003 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:55.003 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3028603 /var/tmp/spdk2.sock 00:04:55.003 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:55.003 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.003 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:55.003 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.003 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3028603 /var/tmp/spdk2.sock 00:04:55.003 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3028603 ']' 00:04:55.003 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.003 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:55.003 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:55.003 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:55.003 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.003 [2024-11-06 10:46:46.230492] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:04:55.003 [2024-11-06 10:46:46.230547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3028603 ] 00:04:55.003 [2024-11-06 10:46:46.340507] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3028269 has claimed it. 00:04:55.003 [2024-11-06 10:46:46.340551] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:55.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3028603) - No such process 00:04:55.575 ERROR: process (pid: 3028603) is no longer running 00:04:55.575 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:55.575 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:55.575 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:55.575 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:55.575 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:55.575 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:55.575 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3028269 00:04:55.575 10:46:46 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3028269 00:04:55.575 10:46:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:55.836 lslocks: write error 00:04:55.836 10:46:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3028269 00:04:55.836 10:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3028269 ']' 00:04:55.836 10:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3028269 00:04:55.836 10:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:55.836 10:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:55.836 10:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3028269 00:04:56.097 10:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:56.097 10:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:56.097 10:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3028269' 00:04:56.097 killing process with pid 3028269 00:04:56.097 10:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3028269 00:04:56.097 10:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3028269 00:04:56.097 00:04:56.097 real 0m2.194s 00:04:56.097 user 0m2.479s 00:04:56.097 sys 0m0.611s 00:04:56.097 10:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:56.097 10:46:47 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:56.097 ************************************ 00:04:56.097 END TEST locking_app_on_locked_coremask 00:04:56.097 ************************************ 00:04:56.358 10:46:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:56.358 10:46:47 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.358 10:46:47 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.358 10:46:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.358 ************************************ 00:04:56.358 START TEST locking_overlapped_coremask 00:04:56.358 ************************************ 00:04:56.358 10:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:04:56.358 10:46:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3028864 00:04:56.358 10:46:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3028864 /var/tmp/spdk.sock 00:04:56.358 10:46:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:56.358 10:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3028864 ']' 00:04:56.358 10:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.358 10:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:56.358 10:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:56.358 10:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:56.358 10:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.358 [2024-11-06 10:46:47.632583] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:04:56.358 [2024-11-06 10:46:47.632635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3028864 ] 00:04:56.358 [2024-11-06 10:46:47.705790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:56.358 [2024-11-06 10:46:47.747420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.358 [2024-11-06 10:46:47.747554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.358 [2024-11-06 10:46:47.747557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3028982 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3028982 /var/tmp/spdk2.sock 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 3028982 /var/tmp/spdk2.sock 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3028982 /var/tmp/spdk2.sock 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3028982 ']' 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.300 10:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.300 [2024-11-06 10:46:48.494333] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:04:57.300 [2024-11-06 10:46:48.494388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3028982 ] 00:04:57.300 [2024-11-06 10:46:48.582418] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3028864 has claimed it. 00:04:57.300 [2024-11-06 10:46:48.582449] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:57.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3028982) - No such process 00:04:57.871 ERROR: process (pid: 3028982) is no longer running 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3028864 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 3028864 ']' 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 3028864 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3028864 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3028864' 00:04:57.871 killing process with pid 3028864 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 3028864 00:04:57.871 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 3028864 00:04:58.132 00:04:58.132 real 0m1.809s 00:04:58.132 user 0m5.288s 00:04:58.132 sys 0m0.350s 00:04:58.132 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:58.132 10:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.132 
************************************ 00:04:58.132 END TEST locking_overlapped_coremask 00:04:58.132 ************************************ 00:04:58.132 10:46:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:58.132 10:46:49 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:58.132 10:46:49 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.132 10:46:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.132 ************************************ 00:04:58.132 START TEST locking_overlapped_coremask_via_rpc 00:04:58.132 ************************************ 00:04:58.132 10:46:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:04:58.132 10:46:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3029342 00:04:58.132 10:46:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3029342 /var/tmp/spdk.sock 00:04:58.132 10:46:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:58.132 10:46:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3029342 ']' 00:04:58.132 10:46:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.132 10:46:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:58.132 10:46:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:58.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.132 10:46:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:58.132 10:46:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.132 [2024-11-06 10:46:49.516160] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:04:58.132 [2024-11-06 10:46:49.516211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3029342 ] 00:04:58.393 [2024-11-06 10:46:49.588397] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:58.393 [2024-11-06 10:46:49.588428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:58.393 [2024-11-06 10:46:49.629667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.393 [2024-11-06 10:46:49.629795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.393 [2024-11-06 10:46:49.630008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.963 10:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:58.963 10:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:58.963 10:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3029362 00:04:58.963 10:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3029362 /var/tmp/spdk2.sock 00:04:58.963 10:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3029362 ']' 00:04:58.963 10:46:50 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:58.963 10:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:58.963 10:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:58.963 10:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:58.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:58.963 10:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:58.963 10:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.963 [2024-11-06 10:46:50.374141] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:04:58.963 [2024-11-06 10:46:50.374195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3029362 ] 00:04:59.224 [2024-11-06 10:46:50.462872] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:59.224 [2024-11-06 10:46:50.462896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:59.224 [2024-11-06 10:46:50.525819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.224 [2024-11-06 10:46:50.525873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.224 [2024-11-06 10:46:50.525875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.794 10:46:51 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.794 [2024-11-06 10:46:51.178815] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3029342 has claimed it. 00:04:59.794 request: 00:04:59.794 { 00:04:59.794 "method": "framework_enable_cpumask_locks", 00:04:59.794 "req_id": 1 00:04:59.794 } 00:04:59.794 Got JSON-RPC error response 00:04:59.794 response: 00:04:59.794 { 00:04:59.794 "code": -32603, 00:04:59.794 "message": "Failed to claim CPU core: 2" 00:04:59.794 } 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3029342 /var/tmp/spdk.sock 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 
-- # '[' -z 3029342 ']' 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:59.794 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.054 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:00.054 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:00.054 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3029362 /var/tmp/spdk2.sock 00:05:00.054 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3029362 ']' 00:05:00.054 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.054 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:00.054 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:00.054 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:00.054 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.314 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:00.314 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:00.314 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:00.314 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:00.314 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:00.314 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:00.314 00:05:00.314 real 0m2.088s 00:05:00.314 user 0m0.851s 00:05:00.314 sys 0m0.167s 00:05:00.314 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:00.314 10:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.314 ************************************ 00:05:00.314 END TEST locking_overlapped_coremask_via_rpc 00:05:00.314 ************************************ 00:05:00.314 10:46:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:00.314 10:46:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3029342 ]] 00:05:00.314 10:46:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3029342 00:05:00.314 10:46:51 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3029342 ']' 00:05:00.314 10:46:51 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3029342 00:05:00.314 10:46:51 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:00.314 10:46:51 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:00.314 10:46:51 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3029342 00:05:00.314 10:46:51 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:00.314 10:46:51 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:00.314 10:46:51 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3029342' 00:05:00.314 killing process with pid 3029342 00:05:00.314 10:46:51 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3029342 00:05:00.314 10:46:51 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3029342 00:05:00.574 10:46:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3029362 ]] 00:05:00.574 10:46:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3029362 00:05:00.574 10:46:51 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3029362 ']' 00:05:00.574 10:46:51 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3029362 00:05:00.574 10:46:51 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:00.574 10:46:51 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:00.574 10:46:51 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3029362 00:05:00.574 10:46:51 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:00.574 10:46:51 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:00.574 10:46:51 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
3029362' 00:05:00.574 killing process with pid 3029362 00:05:00.574 10:46:51 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3029362 00:05:00.574 10:46:51 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3029362 00:05:00.835 10:46:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:00.835 10:46:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:00.835 10:46:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3029342 ]] 00:05:00.835 10:46:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3029342 00:05:00.835 10:46:52 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3029342 ']' 00:05:00.835 10:46:52 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3029342 00:05:00.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3029342) - No such process 00:05:00.835 10:46:52 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3029342 is not found' 00:05:00.835 Process with pid 3029342 is not found 00:05:00.835 10:46:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3029362 ]] 00:05:00.835 10:46:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3029362 00:05:00.835 10:46:52 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3029362 ']' 00:05:00.835 10:46:52 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3029362 00:05:00.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3029362) - No such process 00:05:00.835 10:46:52 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3029362 is not found' 00:05:00.835 Process with pid 3029362 is not found 00:05:00.835 10:46:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:00.835 00:05:00.835 real 0m16.215s 00:05:00.835 user 0m28.617s 00:05:00.835 sys 0m4.792s 00:05:00.835 10:46:52 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:00.835 
10:46:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.835 ************************************ 00:05:00.835 END TEST cpu_locks 00:05:00.835 ************************************ 00:05:00.835 00:05:00.835 real 0m41.145s 00:05:00.835 user 1m20.174s 00:05:00.835 sys 0m7.979s 00:05:00.835 10:46:52 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:00.835 10:46:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.835 ************************************ 00:05:00.835 END TEST event 00:05:00.835 ************************************ 00:05:00.835 10:46:52 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:00.835 10:46:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:00.835 10:46:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:00.835 10:46:52 -- common/autotest_common.sh@10 -- # set +x 00:05:00.835 ************************************ 00:05:00.835 START TEST thread 00:05:00.835 ************************************ 00:05:00.835 10:46:52 thread -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:01.095 * Looking for test storage... 
00:05:01.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:01.095 10:46:52 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:01.095 10:46:52 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:01.095 10:46:52 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:01.095 10:46:52 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:01.095 10:46:52 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.095 10:46:52 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.095 10:46:52 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.095 10:46:52 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.095 10:46:52 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.095 10:46:52 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.095 10:46:52 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.095 10:46:52 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.095 10:46:52 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.095 10:46:52 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.095 10:46:52 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.095 10:46:52 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:01.095 10:46:52 thread -- scripts/common.sh@345 -- # : 1 00:05:01.095 10:46:52 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.095 10:46:52 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.095 10:46:52 thread -- scripts/common.sh@365 -- # decimal 1 00:05:01.095 10:46:52 thread -- scripts/common.sh@353 -- # local d=1 00:05:01.095 10:46:52 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.095 10:46:52 thread -- scripts/common.sh@355 -- # echo 1 00:05:01.095 10:46:52 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.095 10:46:52 thread -- scripts/common.sh@366 -- # decimal 2 00:05:01.095 10:46:52 thread -- scripts/common.sh@353 -- # local d=2 00:05:01.095 10:46:52 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.095 10:46:52 thread -- scripts/common.sh@355 -- # echo 2 00:05:01.095 10:46:52 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.096 10:46:52 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.096 10:46:52 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.096 10:46:52 thread -- scripts/common.sh@368 -- # return 0 00:05:01.096 10:46:52 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.096 10:46:52 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:01.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.096 --rc genhtml_branch_coverage=1 00:05:01.096 --rc genhtml_function_coverage=1 00:05:01.096 --rc genhtml_legend=1 00:05:01.096 --rc geninfo_all_blocks=1 00:05:01.096 --rc geninfo_unexecuted_blocks=1 00:05:01.096 00:05:01.096 ' 00:05:01.096 10:46:52 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:01.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.096 --rc genhtml_branch_coverage=1 00:05:01.096 --rc genhtml_function_coverage=1 00:05:01.096 --rc genhtml_legend=1 00:05:01.096 --rc geninfo_all_blocks=1 00:05:01.096 --rc geninfo_unexecuted_blocks=1 00:05:01.096 00:05:01.096 ' 00:05:01.096 10:46:52 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:01.096 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.096 --rc genhtml_branch_coverage=1 00:05:01.096 --rc genhtml_function_coverage=1 00:05:01.096 --rc genhtml_legend=1 00:05:01.096 --rc geninfo_all_blocks=1 00:05:01.096 --rc geninfo_unexecuted_blocks=1 00:05:01.096 00:05:01.096 ' 00:05:01.096 10:46:52 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:01.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.096 --rc genhtml_branch_coverage=1 00:05:01.096 --rc genhtml_function_coverage=1 00:05:01.096 --rc genhtml_legend=1 00:05:01.096 --rc geninfo_all_blocks=1 00:05:01.096 --rc geninfo_unexecuted_blocks=1 00:05:01.096 00:05:01.096 ' 00:05:01.096 10:46:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:01.096 10:46:52 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:01.096 10:46:52 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.096 10:46:52 thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.096 ************************************ 00:05:01.096 START TEST thread_poller_perf 00:05:01.096 ************************************ 00:05:01.096 10:46:52 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:01.096 [2024-11-06 10:46:52.509553] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:05:01.096 [2024-11-06 10:46:52.509658] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3029946 ] 00:05:01.356 [2024-11-06 10:46:52.594690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.356 [2024-11-06 10:46:52.631703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.356 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:02.295 [2024-11-06T09:46:53.717Z] ====================================== 00:05:02.295 [2024-11-06T09:46:53.717Z] busy:2408222618 (cyc) 00:05:02.295 [2024-11-06T09:46:53.717Z] total_run_count: 287000 00:05:02.295 [2024-11-06T09:46:53.717Z] tsc_hz: 2400000000 (cyc) 00:05:02.295 [2024-11-06T09:46:53.717Z] ====================================== 00:05:02.295 [2024-11-06T09:46:53.717Z] poller_cost: 8391 (cyc), 3496 (nsec) 00:05:02.295 00:05:02.295 real 0m1.184s 00:05:02.295 user 0m1.114s 00:05:02.295 sys 0m0.066s 00:05:02.295 10:46:53 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:02.295 10:46:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:02.295 ************************************ 00:05:02.295 END TEST thread_poller_perf 00:05:02.295 ************************************ 00:05:02.295 10:46:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:02.295 10:46:53 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:02.295 10:46:53 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.295 10:46:53 thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.555 ************************************ 00:05:02.555 START TEST thread_poller_perf 00:05:02.555 
************************************ 00:05:02.555 10:46:53 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:02.555 [2024-11-06 10:46:53.769293] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:05:02.555 [2024-11-06 10:46:53.769406] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3030163 ] 00:05:02.555 [2024-11-06 10:46:53.843884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.555 [2024-11-06 10:46:53.878184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.555 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:03.498 [2024-11-06T09:46:54.920Z] ====================================== 00:05:03.498 [2024-11-06T09:46:54.920Z] busy:2401957044 (cyc) 00:05:03.498 [2024-11-06T09:46:54.920Z] total_run_count: 3813000 00:05:03.498 [2024-11-06T09:46:54.920Z] tsc_hz: 2400000000 (cyc) 00:05:03.498 [2024-11-06T09:46:54.920Z] ====================================== 00:05:03.498 [2024-11-06T09:46:54.920Z] poller_cost: 629 (cyc), 262 (nsec) 00:05:03.498 00:05:03.498 real 0m1.164s 00:05:03.498 user 0m1.095s 00:05:03.498 sys 0m0.065s 00:05:03.498 10:46:54 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.498 10:46:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.498 ************************************ 00:05:03.498 END TEST thread_poller_perf 00:05:03.498 ************************************ 00:05:03.759 10:46:54 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:03.759 00:05:03.759 real 0m2.702s 00:05:03.759 user 0m2.393s 00:05:03.759 sys 0m0.325s 00:05:03.759 10:46:54 thread -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.759 10:46:54 thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.759 ************************************ 00:05:03.759 END TEST thread 00:05:03.759 ************************************ 00:05:03.759 10:46:54 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:03.759 10:46:54 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:03.759 10:46:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.759 10:46:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.759 10:46:54 -- common/autotest_common.sh@10 -- # set +x 00:05:03.759 ************************************ 00:05:03.759 START TEST app_cmdline 00:05:03.759 ************************************ 00:05:03.759 10:46:55 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:03.759 * Looking for test storage... 00:05:03.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:03.759 10:46:55 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:03.759 10:46:55 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:03.759 10:46:55 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:03.759 10:46:55 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:03.759 10:46:55 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.759 10:46:55 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.759 10:46:55 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.759 10:46:55 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.759 10:46:55 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.759 10:46:55 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.759 10:46:55 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:03.759 10:46:55 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.759 10:46:55 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.759 10:46:55 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.020 10:46:55 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:04.020 10:46:55 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.020 10:46:55 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:04.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.020 --rc genhtml_branch_coverage=1 
00:05:04.020 --rc genhtml_function_coverage=1 00:05:04.020 --rc genhtml_legend=1 00:05:04.020 --rc geninfo_all_blocks=1 00:05:04.020 --rc geninfo_unexecuted_blocks=1 00:05:04.020 00:05:04.020 ' 00:05:04.020 10:46:55 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:04.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.020 --rc genhtml_branch_coverage=1 00:05:04.020 --rc genhtml_function_coverage=1 00:05:04.020 --rc genhtml_legend=1 00:05:04.020 --rc geninfo_all_blocks=1 00:05:04.020 --rc geninfo_unexecuted_blocks=1 00:05:04.020 00:05:04.020 ' 00:05:04.020 10:46:55 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:04.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.020 --rc genhtml_branch_coverage=1 00:05:04.020 --rc genhtml_function_coverage=1 00:05:04.020 --rc genhtml_legend=1 00:05:04.020 --rc geninfo_all_blocks=1 00:05:04.020 --rc geninfo_unexecuted_blocks=1 00:05:04.020 00:05:04.020 ' 00:05:04.020 10:46:55 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:04.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.020 --rc genhtml_branch_coverage=1 00:05:04.020 --rc genhtml_function_coverage=1 00:05:04.020 --rc genhtml_legend=1 00:05:04.020 --rc geninfo_all_blocks=1 00:05:04.020 --rc geninfo_unexecuted_blocks=1 00:05:04.020 00:05:04.020 ' 00:05:04.020 10:46:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:04.020 10:46:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3030560 00:05:04.020 10:46:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3030560 00:05:04.020 10:46:55 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 3030560 ']' 00:05:04.020 10:46:55 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:04.020 10:46:55 app_cmdline -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:04.020 10:46:55 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:04.020 10:46:55 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.020 10:46:55 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:04.020 10:46:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:04.020 [2024-11-06 10:46:55.269198] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:05:04.020 [2024-11-06 10:46:55.269270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3030560 ] 00:05:04.020 [2024-11-06 10:46:55.344573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.020 [2024-11-06 10:46:55.387694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.962 10:46:56 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:04.962 10:46:56 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:04.962 10:46:56 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:04.962 { 00:05:04.962 "version": "SPDK v25.01-pre git sha1 f0e4b91ff", 00:05:04.962 "fields": { 00:05:04.962 "major": 25, 00:05:04.962 "minor": 1, 00:05:04.962 "patch": 0, 00:05:04.962 "suffix": "-pre", 00:05:04.962 "commit": "f0e4b91ff" 00:05:04.962 } 00:05:04.962 } 00:05:04.962 10:46:56 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:04.962 10:46:56 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:04.962 10:46:56 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:05:04.962 10:46:56 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:04.962 10:46:56 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:04.962 10:46:56 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:04.962 10:46:56 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:04.962 10:46:56 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.962 10:46:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:04.962 10:46:56 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.962 10:46:56 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:04.962 10:46:56 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:04.962 10:46:56 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:04.962 10:46:56 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:04.962 10:46:56 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:04.962 10:46:56 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:04.962 10:46:56 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:04.962 10:46:56 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:04.962 10:46:56 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:04.962 10:46:56 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:04.962 10:46:56 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:05:04.963 10:46:56 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:04.963 10:46:56 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:04.963 10:46:56 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:05.224 request: 00:05:05.225 { 00:05:05.225 "method": "env_dpdk_get_mem_stats", 00:05:05.225 "req_id": 1 00:05:05.225 } 00:05:05.225 Got JSON-RPC error response 00:05:05.225 response: 00:05:05.225 { 00:05:05.225 "code": -32601, 00:05:05.225 "message": "Method not found" 00:05:05.225 } 00:05:05.225 10:46:56 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:05.225 10:46:56 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:05.225 10:46:56 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:05.225 10:46:56 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:05.225 10:46:56 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3030560 00:05:05.225 10:46:56 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 3030560 ']' 00:05:05.225 10:46:56 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 3030560 00:05:05.225 10:46:56 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:05.225 10:46:56 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:05.225 10:46:56 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3030560 00:05:05.225 10:46:56 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:05.225 10:46:56 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:05.225 10:46:56 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3030560' 00:05:05.225 killing process with pid 3030560 00:05:05.225 
10:46:56 app_cmdline -- common/autotest_common.sh@971 -- # kill 3030560 00:05:05.225 10:46:56 app_cmdline -- common/autotest_common.sh@976 -- # wait 3030560 00:05:05.486 00:05:05.486 real 0m1.669s 00:05:05.486 user 0m2.005s 00:05:05.486 sys 0m0.443s 00:05:05.486 10:46:56 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:05.486 10:46:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:05.486 ************************************ 00:05:05.486 END TEST app_cmdline 00:05:05.486 ************************************ 00:05:05.486 10:46:56 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:05.486 10:46:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:05.486 10:46:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:05.486 10:46:56 -- common/autotest_common.sh@10 -- # set +x 00:05:05.486 ************************************ 00:05:05.486 START TEST version 00:05:05.486 ************************************ 00:05:05.486 10:46:56 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:05.486 * Looking for test storage... 
00:05:05.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:05.486 10:46:56 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:05.486 10:46:56 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:05.486 10:46:56 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:05.747 10:46:56 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:05.747 10:46:56 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.747 10:46:56 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.747 10:46:56 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.747 10:46:56 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.747 10:46:56 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.747 10:46:56 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.747 10:46:56 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.747 10:46:56 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.747 10:46:56 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.747 10:46:56 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.747 10:46:56 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.747 10:46:56 version -- scripts/common.sh@344 -- # case "$op" in 00:05:05.747 10:46:56 version -- scripts/common.sh@345 -- # : 1 00:05:05.747 10:46:56 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.747 10:46:56 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.747 10:46:56 version -- scripts/common.sh@365 -- # decimal 1 00:05:05.747 10:46:56 version -- scripts/common.sh@353 -- # local d=1 00:05:05.747 10:46:56 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.747 10:46:56 version -- scripts/common.sh@355 -- # echo 1 00:05:05.747 10:46:56 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.747 10:46:56 version -- scripts/common.sh@366 -- # decimal 2 00:05:05.747 10:46:56 version -- scripts/common.sh@353 -- # local d=2 00:05:05.747 10:46:56 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.747 10:46:56 version -- scripts/common.sh@355 -- # echo 2 00:05:05.747 10:46:56 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.747 10:46:56 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.748 10:46:56 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.748 10:46:56 version -- scripts/common.sh@368 -- # return 0 00:05:05.748 10:46:56 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.748 10:46:56 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:05.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.748 --rc genhtml_branch_coverage=1 00:05:05.748 --rc genhtml_function_coverage=1 00:05:05.748 --rc genhtml_legend=1 00:05:05.748 --rc geninfo_all_blocks=1 00:05:05.748 --rc geninfo_unexecuted_blocks=1 00:05:05.748 00:05:05.748 ' 00:05:05.748 10:46:56 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:05.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.748 --rc genhtml_branch_coverage=1 00:05:05.748 --rc genhtml_function_coverage=1 00:05:05.748 --rc genhtml_legend=1 00:05:05.748 --rc geninfo_all_blocks=1 00:05:05.748 --rc geninfo_unexecuted_blocks=1 00:05:05.748 00:05:05.748 ' 00:05:05.748 10:46:56 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:05.748 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.748 --rc genhtml_branch_coverage=1 00:05:05.748 --rc genhtml_function_coverage=1 00:05:05.748 --rc genhtml_legend=1 00:05:05.748 --rc geninfo_all_blocks=1 00:05:05.748 --rc geninfo_unexecuted_blocks=1 00:05:05.748 00:05:05.748 ' 00:05:05.748 10:46:56 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:05.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.748 --rc genhtml_branch_coverage=1 00:05:05.748 --rc genhtml_function_coverage=1 00:05:05.748 --rc genhtml_legend=1 00:05:05.748 --rc geninfo_all_blocks=1 00:05:05.748 --rc geninfo_unexecuted_blocks=1 00:05:05.748 00:05:05.748 ' 00:05:05.748 10:46:56 version -- app/version.sh@17 -- # get_header_version major 00:05:05.748 10:46:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:05.748 10:46:56 version -- app/version.sh@14 -- # cut -f2 00:05:05.748 10:46:56 version -- app/version.sh@14 -- # tr -d '"' 00:05:05.748 10:46:56 version -- app/version.sh@17 -- # major=25 00:05:05.748 10:46:56 version -- app/version.sh@18 -- # get_header_version minor 00:05:05.748 10:46:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:05.748 10:46:56 version -- app/version.sh@14 -- # cut -f2 00:05:05.748 10:46:56 version -- app/version.sh@14 -- # tr -d '"' 00:05:05.748 10:46:56 version -- app/version.sh@18 -- # minor=1 00:05:05.748 10:46:56 version -- app/version.sh@19 -- # get_header_version patch 00:05:05.748 10:46:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:05.748 10:46:56 version -- app/version.sh@14 -- # cut -f2 00:05:05.748 10:46:56 version -- app/version.sh@14 -- # tr -d '"' 00:05:05.748 
10:46:57 version -- app/version.sh@19 -- # patch=0 00:05:05.748 10:46:57 version -- app/version.sh@20 -- # get_header_version suffix 00:05:05.748 10:46:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:05.748 10:46:57 version -- app/version.sh@14 -- # cut -f2 00:05:05.748 10:46:57 version -- app/version.sh@14 -- # tr -d '"' 00:05:05.748 10:46:57 version -- app/version.sh@20 -- # suffix=-pre 00:05:05.748 10:46:57 version -- app/version.sh@22 -- # version=25.1 00:05:05.748 10:46:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:05.748 10:46:57 version -- app/version.sh@28 -- # version=25.1rc0 00:05:05.748 10:46:57 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:05.748 10:46:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:05.748 10:46:57 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:05.748 10:46:57 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:05.748 00:05:05.748 real 0m0.278s 00:05:05.748 user 0m0.171s 00:05:05.748 sys 0m0.152s 00:05:05.748 10:46:57 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:05.748 10:46:57 version -- common/autotest_common.sh@10 -- # set +x 00:05:05.748 ************************************ 00:05:05.748 END TEST version 00:05:05.748 ************************************ 00:05:05.748 10:46:57 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:05.748 10:46:57 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:05.748 10:46:57 -- spdk/autotest.sh@194 -- # uname -s 00:05:05.748 10:46:57 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:05.748 10:46:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:05.748 10:46:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:05.748 10:46:57 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:05.748 10:46:57 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:05.748 10:46:57 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:05.748 10:46:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.748 10:46:57 -- common/autotest_common.sh@10 -- # set +x 00:05:05.748 10:46:57 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:05.748 10:46:57 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:05.748 10:46:57 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:05.748 10:46:57 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:05.748 10:46:57 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:05.748 10:46:57 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:05.748 10:46:57 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:05.748 10:46:57 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:05.748 10:46:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:05.748 10:46:57 -- common/autotest_common.sh@10 -- # set +x 00:05:06.010 ************************************ 00:05:06.010 START TEST nvmf_tcp 00:05:06.010 ************************************ 00:05:06.010 10:46:57 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:06.010 * Looking for test storage... 
00:05:06.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:05:06.010 10:46:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s
00:05:06.010 10:46:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:05:06.010 10:46:57 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:05:06.010 10:46:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:05:06.010 10:46:57 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:06.010 10:46:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:06.010 ************************************
00:05:06.011 START TEST nvmf_target_core
00:05:06.011 ************************************
00:05:06.011 10:46:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:05:06.272 * Looking for test storage...
00:05:06.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!'
Linux = Linux ']' 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.272 10:46:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:06.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:06.273 ************************************ 00:05:06.273 START TEST nvmf_abort 00:05:06.273 ************************************ 00:05:06.273 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:06.534 * Looking for test storage... 
00:05:06.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
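The nvmf/common.sh values traced here (listener ports 4420/4421/4422, the generated host NQN/host ID, the 192.168.100 prefix) are what the target tests later hand to `nvme connect`. A sketch of how such values combine into connect arguments; `connect_args` is a hypothetical illustration helper, not an SPDK or nvme-cli function, though the flags shown (`-t`, `-a`, `-s`, `-n`, `--hostnqn`) are real nvme-cli options:

```python
# Illustrative only: assemble nvme-cli connect arguments from the kind of
# values nvmf/common.sh exports in the log above.
def connect_args(traddr,
                 port=4420,
                 subnqn="nqn.2016-06.io.spdk:testnqn",
                 hostnqn="nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be"):
    return ["nvme", "connect",
            "-t", "tcp",          # transport type, matching --transport=tcp
            "-a", traddr,         # target address (192.168.100.x in phy runs)
            "-s", str(port),      # transport service id, NVMF_PORT by default
            "-n", subnqn,         # subsystem NQN under test
            "--hostnqn", hostnqn]

print(" ".join(connect_args("192.168.100.8")))
```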
00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.535 10:46:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.535 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:06.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:06.536 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:14.685 10:47:04 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:14.685 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:14.685 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:14.685 10:47:04 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:14.685 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:14.686 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:4b:00.1: cvl_0_1' 00:05:14.686 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:14.686 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:14.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:14.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:05:14.686 00:05:14.686 --- 10.0.0.2 ping statistics --- 00:05:14.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:14.686 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:14.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:14.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:05:14.686 00:05:14.686 --- 10.0.0.1 ping statistics --- 00:05:14.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:14.686 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3035047 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3035047 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3035047 ']' 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:14.686 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:14.686 [2024-11-06 10:47:05.246794] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:05:14.686 [2024-11-06 10:47:05.246855] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:14.686 [2024-11-06 10:47:05.342680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:14.686 [2024-11-06 10:47:05.386718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:14.686 [2024-11-06 10:47:05.386777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:14.686 [2024-11-06 10:47:05.386786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:14.686 [2024-11-06 10:47:05.386793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:14.686 [2024-11-06 10:47:05.386799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:14.686 [2024-11-06 10:47:05.388417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.686 [2024-11-06 10:47:05.388580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.686 [2024-11-06 10:47:05.388581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:14.686 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:14.686 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:05:14.686 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:14.686 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:14.686 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:14.686 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:14.686 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:14.686 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.686 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:14.686 [2024-11-06 10:47:06.079818] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:14.686 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.686 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:14.686 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.686 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:14.947 Malloc0 00:05:14.947 10:47:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:14.947 Delay0 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:14.947 [2024-11-06 10:47:06.162834] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.947 10:47:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:14.947 [2024-11-06 10:47:06.332892] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:17.492 Initializing NVMe Controllers 00:05:17.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:17.492 controller IO queue size 128 less than required 00:05:17.492 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:17.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:17.492 Initialization complete. Launching workers. 
00:05:17.492 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28983 00:05:17.492 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29044, failed to submit 62 00:05:17.492 success 28987, unsuccessful 57, failed 0 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:17.492 rmmod nvme_tcp 00:05:17.492 rmmod nvme_fabrics 00:05:17.492 rmmod nvme_keyring 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:17.492 10:47:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3035047 ']' 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3035047 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3035047 ']' 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3035047 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3035047 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3035047' 00:05:17.492 killing process with pid 3035047 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3035047 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3035047 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 
-- # iptables-save 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:17.492 10:47:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:19.418 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:19.418 00:05:19.418 real 0m13.139s 00:05:19.418 user 0m14.151s 00:05:19.418 sys 0m6.232s 00:05:19.418 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:19.418 10:47:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:19.418 ************************************ 00:05:19.418 END TEST nvmf_abort 00:05:19.418 ************************************ 00:05:19.680 10:47:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:19.680 10:47:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:19.680 10:47:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.680 10:47:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:19.680 ************************************ 00:05:19.680 START TEST nvmf_ns_hotplug_stress 00:05:19.680 ************************************ 00:05:19.680 10:47:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:19.680 * Looking for test storage... 00:05:19.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.680 
10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:19.680 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.942 10:47:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:19.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.942 --rc genhtml_branch_coverage=1 00:05:19.942 --rc genhtml_function_coverage=1 00:05:19.942 --rc genhtml_legend=1 00:05:19.942 --rc geninfo_all_blocks=1 00:05:19.942 --rc geninfo_unexecuted_blocks=1 00:05:19.942 00:05:19.942 ' 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:19.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.942 --rc genhtml_branch_coverage=1 00:05:19.942 --rc genhtml_function_coverage=1 00:05:19.942 --rc genhtml_legend=1 00:05:19.942 --rc geninfo_all_blocks=1 00:05:19.942 --rc geninfo_unexecuted_blocks=1 00:05:19.942 00:05:19.942 ' 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:19.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.942 --rc genhtml_branch_coverage=1 00:05:19.942 --rc genhtml_function_coverage=1 00:05:19.942 --rc genhtml_legend=1 00:05:19.942 --rc geninfo_all_blocks=1 00:05:19.942 --rc geninfo_unexecuted_blocks=1 00:05:19.942 00:05:19.942 ' 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:19.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.942 --rc genhtml_branch_coverage=1 00:05:19.942 --rc genhtml_function_coverage=1 00:05:19.942 --rc genhtml_legend=1 00:05:19.942 --rc geninfo_all_blocks=1 00:05:19.942 --rc geninfo_unexecuted_blocks=1 00:05:19.942 
00:05:19.942 ' 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.942 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:19.943 10:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:28.090 10:47:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:28.090 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:28.091 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:28.091 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:28.091 10:47:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:28.091 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:28.091 10:47:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:28.091 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:28.091 10:47:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:05:28.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:05:28.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms
00:05:28.091
00:05:28.091 --- 10.0.0.2 ping statistics ---
00:05:28.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:28.091 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms
00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:28.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:28.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms
00:05:28.091
00:05:28.091 --- 10.0.0.1 ping statistics ---
00:05:28.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:28.091 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms
00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:05:28.091 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:05:28.092 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:28.092 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:28.092 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3039932
00:05:28.092 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3039932
00:05:28.092 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:05:28.092 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 3039932 ']'
00:05:28.092 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:28.092 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:28.092 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:28.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:28.092 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:28.092 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:28.092 [2024-11-06 10:47:18.576987] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization...
00:05:28.092 [2024-11-06 10:47:18.577059] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:28.092 [2024-11-06 10:47:18.677034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:28.092 [2024-11-06 10:47:18.729743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:05:28.092 [2024-11-06 10:47:18.729810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:05:28.092 [2024-11-06 10:47:18.729818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:28.092 [2024-11-06 10:47:18.729826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:28.092 [2024-11-06 10:47:18.729832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:05:28.092 [2024-11-06 10:47:18.731644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:28.092 [2024-11-06 10:47:18.731811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:28.092 [2024-11-06 10:47:18.731847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:28.092 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:28.092 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0
00:05:28.092 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:05:28.092 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:05:28.092 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:28.092 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:05:28.092 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:05:28.092 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:05:28.407 [2024-11-06 10:47:19.582997] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:28.407 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:05:28.407 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:05:28.698 [2024-11-06 10:47:19.940465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:05:28.698 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:05:28.964 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:05:28.964 Malloc0
00:05:28.964 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:05:29.224 Delay0
00:05:29.224 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:29.485 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:05:29.485 NULL1
00:05:29.746 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:05:29.746 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:05:29.746 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3040474
00:05:29.746 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:29.746 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:30.007 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:30.267 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:05:30.267 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:05:30.267 true
00:05:30.267 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:30.267 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:30.528 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:30.790 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:05:30.790 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:05:30.790 true
00:05:30.790 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:30.790 10:47:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:32.175 Read completed with error (sct=0, sc=11)
00:05:32.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:32.175 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:32.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:32.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:32.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:32.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:32.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:32.175 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:05:32.175 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:05:32.436 true
00:05:32.436 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:32.436 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:33.378 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:33.378 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:05:33.378 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:05:33.639 true
00:05:33.639 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:33.639 10:47:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:33.901 10:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:33.901 10:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:05:33.901 10:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:05:34.162 true
00:05:34.162 10:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:34.162 10:47:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:35.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:35.365 10:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:35.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:35.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:35.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:35.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:35.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:35.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:35.365 10:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:05:35.365 10:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:05:35.626 true
00:05:35.626 10:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:35.626 10:47:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:36.568 10:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:36.568 10:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:05:36.568 10:47:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:05:36.834 true
00:05:36.834 10:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:36.834 10:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:37.096 10:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:37.096 10:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:05:37.096 10:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:05:37.357 true
00:05:37.357 10:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:37.357 10:47:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:38.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:38.739 10:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:38.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:38.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:38.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:38.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:38.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:38.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:38.739 10:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:05:38.739 10:47:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:05:38.739 true
00:05:38.739 10:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:38.739 10:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:39.679 10:47:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:39.938 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:05:39.938 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:05:39.938 true
00:05:39.938 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:39.938 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:40.198 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:40.459 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:05:40.459 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:05:40.459 true
00:05:40.459 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:40.459 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:40.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:40.721 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:40.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:40.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:40.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:40.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:40.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:40.980 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:05:40.980 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:05:40.980 true
00:05:40.980 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:40.980 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:41.920 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:42.180 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:05:42.180 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:05:42.180 true
00:05:42.180 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:42.180 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:42.440 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:42.701 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:05:42.701 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:05:42.701 true
00:05:42.701 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:42.701 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:44.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:44.087 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:44.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:44.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:44.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:44.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:44.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:44.087 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:05:44.087 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:05:44.349 true
00:05:44.349 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:44.349 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:45.290 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:45.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:45.290 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:05:45.290 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:05:45.551 true
00:05:45.551 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:45.551 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:45.551 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:45.812 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:05:45.812 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:05:46.072 true
00:05:46.072 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:46.072 10:47:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:47.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:47.013 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:47.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:47.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:47.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:47.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:47.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:47.274 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:05:47.274 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:05:47.535 true
00:05:47.535 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:47.535 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:48.475 10:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:48.475 10:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:05:48.475 10:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:05:48.735 true
00:05:48.735 10:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:48.735 10:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:48.735 10:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:48.996 10:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:05:48.996 10:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:05:49.344 true
00:05:49.344 10:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:49.344 10:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:50.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:50.288 10:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:50.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:50.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:50.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:50.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:50.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:50.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:50.549 10:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:05:50.549 10:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:05:50.549 true
00:05:50.810 10:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:50.810 10:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:51.751 10:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:51.751 10:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:05:51.751 10:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:05:51.751 true
00:05:52.011 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:52.011 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:52.011 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:52.272 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:05:52.272 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:05:52.272 true
00:05:52.532 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:52.532 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:52.532 10:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:52.792 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:05:52.792 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:05:53.052 true
00:05:53.052 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:53.052 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:53.052 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:53.311 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:05:53.311 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:05:53.572 true
00:05:53.572 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:53.572 10:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:54.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:54.512 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:54.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:54.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:54.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:54.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:54.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:54.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:54.772 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:05:54.772 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:05:55.031 true
00:05:55.031 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:55.031 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:55.971 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:55.971 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:05:55.971 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:05:56.230 true
00:05:56.230 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:56.230 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:56.489 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:56.489 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:05:56.489 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:05:56.749 true
00:05:56.749 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:05:56.749 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:58.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:58.131 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:58.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:58.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:58.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:58.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:58.131 Message suppressed 999 times: Read completed
with error (sct=0, sc=11) 00:05:58.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.131 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:58.131 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:58.131 true 00:05:58.390 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474 00:05:58.390 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.960 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.220 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:59.220 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:59.481 true 00:05:59.481 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474 00:05:59.481 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.741 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.741 10:47:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:05:59.741 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:06:00.002 true
00:06:00.002 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:06:00.002 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:00.263 Initializing NVMe Controllers
00:06:00.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:00.263 Controller IO queue size 128, less than required.
00:06:00.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:00.263 Controller IO queue size 128, less than required.
00:06:00.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:00.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:00.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:00.263 Initialization complete. Launching workers.
00:06:00.263 ========================================================
00:06:00.263                                                                                                  Latency(us)
00:06:00.263 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:00.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2090.78       1.02   39621.51    1651.93 1080159.89
00:06:00.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   19042.42       9.30    6699.40    1429.21  405138.67
00:06:00.263 ========================================================
00:06:00.263 Total                                                                    :   21133.20      10.32    9956.49    1429.21 1080159.89
00:06:00.263
00:06:00.263 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:00.263 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:06:00.263 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:06:00.523 true
00:06:00.523 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040474
00:06:00.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3040474) - No such process
00:06:00.523 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3040474
00:06:00.523 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:00.784 10:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:00.784
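The records above echo the single-namespace phase of ns_hotplug_stress.sh (the sh@44–sh@50 markers in the trace): probe the I/O workload PID with `kill -0`, remove and re-add namespace 1, then grow the NULL1 null bdev by one block per iteration. A minimal standalone sketch of that loop, reconstructed from the trace — the `rpc` stub standing in for scripts/rpc.py and the iteration count of 3 are illustrative assumptions, not the script's actual values:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stub standing in for /var/jenkins/workspace/.../spdk/scripts/rpc.py;
# the real test sends these RPCs to a running SPDK nvmf target.
rpc() {
  echo "rpc $*"
}

null_size=1021
for _ in 1 2 3; do
  # sh@45/sh@46: hot-remove then re-add the namespace under I/O load
  rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # sh@49/sh@50: grow the null bdev by one block each round
  null_size=$((null_size + 1))
  rpc bdev_null_resize NULL1 "$null_size"
done
echo "final null_size=$null_size"
```

In the real run the loop repeats for as long as the workload process stays alive (the `kill -0 3040474` probe at sh@44), which is why null_size climbs steadily through the log.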
10:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:00.784 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:00.784 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:00.784 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.784 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:01.044 null0 00:06:01.044 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.044 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.044 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:01.304 null1 00:06:01.304 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.304 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.305 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:01.305 null2 00:06:01.305 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.305 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.305 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:01.565 null3 00:06:01.565 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.565 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.565 10:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:01.826 null4 00:06:01.826 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.826 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.826 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:01.826 null5 00:06:01.826 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.826 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.826 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:02.085 null6 00:06:02.085 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.085 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.085 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:02.345 null7 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:02.345 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.346 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:02.346 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:02.346 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:02.346 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:02.346 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:02.346 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3046962 3046964 3046966 3046967 3046969 3046971 3046973 3046975 00:06:02.346 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:02.346 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:02.346 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.346 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.606 
10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.606 10:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.606 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.606 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.606 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:02.868 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.868 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.869 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:02.869 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.869 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.869 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:02.869 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.869 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.869 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.869 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.869 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.869 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.869 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.869 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.131 10:47:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.131 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.392 10:47:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.392 
10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.392 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.653 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.653 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.653 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.653 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.653 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.653 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.653 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.653 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.653 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.653 10:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.653 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.653 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.653 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.653 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.913 10:47:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.913 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.174 10:47:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.174 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.435 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.435 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.435 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.435 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.435 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.435 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.435 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.435 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.435 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:06:04.435 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.435 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.436 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.436 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.436 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.436 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.436 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.436 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.436 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.436 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.697 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.697 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.697 10:47:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.697 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.697 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.697 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.697 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.697 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.697 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.697 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.697 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.697 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.697 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.697 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.697 10:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.697 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.697 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.697 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.697 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.697 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.697 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.697 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.959 10:47:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.959 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.221 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.482 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.743 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.743 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.743 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.743 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.743 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.743 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.743 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.743 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.743 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.743 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.743 10:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.743 10:47:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.743 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.743 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.743 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.744 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.744 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.744 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.744 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.744 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.744 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:06.004 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:06.004 rmmod nvme_tcp 00:06:06.265 rmmod nvme_fabrics 00:06:06.265 rmmod nvme_keyring 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' 
-n 3039932 ']' 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3039932 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3039932 ']' 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3039932 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3039932 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3039932' 00:06:06.265 killing process with pid 3039932 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3039932 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3039932 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@791 -- # iptables-save 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:06.265 10:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.818 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:08.818 00:06:08.818 real 0m48.835s 00:06:08.818 user 3m9.976s 00:06:08.818 sys 0m15.447s 00:06:08.818 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.818 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:08.818 ************************************ 00:06:08.818 END TEST nvmf_ns_hotplug_stress 00:06:08.818 ************************************ 00:06:08.818 10:47:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:08.818 10:47:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:08.818 10:47:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.818 10:47:59 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:08.818 ************************************ 00:06:08.818 START TEST nvmf_delete_subsystem 00:06:08.818 ************************************ 00:06:08.818 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:08.818 * Looking for test storage... 00:06:08.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.818 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:08.818 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:08.818 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.818 
10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.818 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.819 --rc genhtml_branch_coverage=1 00:06:08.819 --rc genhtml_function_coverage=1 00:06:08.819 --rc genhtml_legend=1 00:06:08.819 --rc geninfo_all_blocks=1 00:06:08.819 --rc geninfo_unexecuted_blocks=1 00:06:08.819 00:06:08.819 ' 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.819 --rc genhtml_branch_coverage=1 00:06:08.819 --rc genhtml_function_coverage=1 00:06:08.819 --rc genhtml_legend=1 00:06:08.819 --rc geninfo_all_blocks=1 00:06:08.819 --rc geninfo_unexecuted_blocks=1 00:06:08.819 00:06:08.819 ' 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.819 --rc genhtml_branch_coverage=1 00:06:08.819 --rc genhtml_function_coverage=1 00:06:08.819 --rc genhtml_legend=1 00:06:08.819 --rc geninfo_all_blocks=1 00:06:08.819 --rc geninfo_unexecuted_blocks=1 00:06:08.819 00:06:08.819 ' 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.819 --rc 
genhtml_branch_coverage=1 00:06:08.819 --rc genhtml_function_coverage=1 00:06:08.819 --rc genhtml_legend=1 00:06:08.819 --rc geninfo_all_blocks=1 00:06:08.819 --rc geninfo_unexecuted_blocks=1 00:06:08.819 00:06:08.819 ' 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.819 10:48:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:08.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:08.819 10:48:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:17.088 10:48:07 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:17.088 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:17.089 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:17.089 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:17.089 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:4b:00.1: cvl_0_1' 00:06:17.089 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:17.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:17.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:06:17.089 00:06:17.089 --- 10.0.0.2 ping statistics --- 00:06:17.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.089 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:17.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:17.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:06:17.089 00:06:17.089 --- 10.0.0.1 ping statistics --- 00:06:17.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.089 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:17.089 10:48:07 
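The `nvmf_tcp_init` trace above boils down to one procedure: move the target-side port into a network namespace, give each side an address on 10.0.0.0/24, open TCP/4420 on the initiator interface, and verify reachability in both directions. A condensed, hedged sketch follows; the `run` dry-run wrapper and the function name are illustrative additions (not SPDK helpers), while the interface names, namespace name, addresses, and command order are taken from the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing shown in the trace above.
# DRY_RUN=1 (the default) prints the privileged commands instead of
# executing them, so the sequence can be inspected on any machine.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else sudo "$@"; fi; }

nvmf_tcp_netns_setup() {
  local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
  run ip -4 addr flush "$tgt_if"
  run ip -4 addr flush "$ini_if"
  run ip netns add "$ns"
  run ip link set "$tgt_if" netns "$ns"           # target port moves into the namespace
  run ip addr add 10.0.0.1/24 dev "$ini_if"       # initiator IP, root namespace
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target IP
  run ip link set "$ini_if" up
  run ip netns exec "$ns" ip link set "$tgt_if" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2                          # initiator -> target
  run ip netns exec "$ns" ping -c 1 10.0.0.1      # target -> initiator
}

nvmf_tcp_netns_setup
```

With both pings answering (0.635 ms and 0.214 ms in the log), the target application is then launched inside the namespace via `ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`.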
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3052310 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3052310 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3052310 ']' 00:06:17.089 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.090 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:17.090 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.090 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:17.090 10:48:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.090 [2024-11-06 10:48:07.600207] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:06:17.090 [2024-11-06 10:48:07.600274] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.090 [2024-11-06 10:48:07.683985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.090 [2024-11-06 10:48:07.725859] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:17.090 [2024-11-06 10:48:07.725899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:17.090 [2024-11-06 10:48:07.725907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.090 [2024-11-06 10:48:07.725914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.090 [2024-11-06 10:48:07.725919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:17.090 [2024-11-06 10:48:07.727275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.090 [2024-11-06 10:48:07.727276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.090 [2024-11-06 10:48:08.441317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.090 [2024-11-06 10:48:08.465523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.090 NULL1 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.090 Delay0 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.090 10:48:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.090 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.351 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3052608 00:06:17.351 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:17.351 10:48:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:17.351 [2024-11-06 10:48:08.572390] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
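Before the subsystem is deleted mid-I/O, the trace above configures the target with a fixed RPC sequence: create the TCP transport, create subsystem `cnode1` with a 10-namespace cap, add a listener on 10.0.0.2:4420, back it with a null bdev wrapped in a 1-second `Delay0` bdev, and attach that as a namespace. A hedged dry-run sketch of that sequence follows; the `rpc_cmd` wrapper and function name here are illustrative stand-ins (the real harness routes through SPDK's `rpc.py`), while the RPC names and arguments are taken from the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the delete_subsystem.sh RPC setup shown above.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
NQN=nqn.2016-06.io.spdk:cnode1
DRY_RUN=${DRY_RUN:-1}
rpc_cmd() { if [ "$DRY_RUN" = 1 ]; then echo "rpc: $*"; else "$SPDK_DIR/scripts/rpc.py" "$@"; fi; }

configure_delete_subsystem_test() {
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512        # null bdev: 1000 MiB, 512 B blocks
  rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000  # 1,000,000 us latency on every path
  rpc_cmd nvmf_subsystem_add_ns "$NQN" Delay0
  # spdk_nvme_perf then drives queued randrw I/O against the target for 5 s,
  # and nvmf_delete_subsystem is issued while that I/O is still in flight.
}

configure_delete_subsystem_test
```

The delay bdev is the point of the test: with every I/O held for a second, `nvmf_delete_subsystem` is guaranteed to race against outstanding commands, which is why the log below shows queued I/O completing with errors rather than a quiescent teardown.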
00:06:19.263 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:19.263 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.263 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error 
(sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 starting I/O failed: -6 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 starting I/O 
failed: -6 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Write completed with error (sct=0, sc=8) 00:06:19.524 starting I/O failed: -6 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.524 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 
starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 [2024-11-06 10:48:10.777269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22322c0 is same with the state(6) to be set 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Read completed 
with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed 
with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 starting I/O failed: -6 00:06:19.525 [2024-11-06 10:48:10.781881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7e9400d490 is same with the state(6) to be set 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 
00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Write completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 Read completed with error (sct=0, sc=8) 00:06:19.525 [2024-11-06 10:48:10.782217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7e94000c40 is same with the state(6) to be set 00:06:20.466 [2024-11-06 10:48:11.754811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22339a0 is same with the state(6) to be set 00:06:20.466 Read completed with error (sct=0, sc=8) 00:06:20.466 Read completed with error (sct=0, sc=8) 00:06:20.466 Write completed with error (sct=0, sc=8) 00:06:20.466 Read completed with error (sct=0, sc=8) 00:06:20.466 Write completed with error (sct=0, sc=8) 00:06:20.466 Read completed with error (sct=0, sc=8) 00:06:20.466 Write completed with error (sct=0, sc=8) 00:06:20.466 Write completed with error (sct=0, sc=8) 00:06:20.466 Read completed with error (sct=0, sc=8) 00:06:20.466 Read completed with error (sct=0, sc=8) 00:06:20.466 Write completed with error (sct=0, sc=8) 00:06:20.466 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 
00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 [2024-11-06 10:48:11.781758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22324a0 is same with the state(6) to be set 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write 
completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 [2024-11-06 10:48:11.781962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2232860 is same with the state(6) to be set 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 [2024-11-06 10:48:11.784007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7e9400d020 is same with the state(6) to be set 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with 
error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Write completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 Read completed with error (sct=0, sc=8) 00:06:20.467 [2024-11-06 10:48:11.784160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7e9400d7c0 is same with the state(6) to be set 00:06:20.467 Initializing NVMe Controllers 00:06:20.467 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:20.467 Controller IO queue size 128, less than required. 00:06:20.467 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:20.467 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:20.467 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:20.467 Initialization complete. Launching workers. 
00:06:20.467 ======================================================== 00:06:20.467 Latency(us) 00:06:20.467 Device Information : IOPS MiB/s Average min max 00:06:20.467 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 190.79 0.09 893414.24 307.92 1006781.11 00:06:20.467 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 151.93 0.07 1014654.65 355.69 2003222.53 00:06:20.467 ======================================================== 00:06:20.467 Total : 342.72 0.17 947161.81 307.92 2003222.53 00:06:20.467 00:06:20.467 [2024-11-06 10:48:11.784724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22339a0 (9): Bad file descriptor 00:06:20.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:20.467 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.467 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:20.467 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3052608 00:06:20.467 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3052608 00:06:21.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3052608) - No such process 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3052608 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:21.038 10:48:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3052608 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3052608 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:21.038 
10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.038 [2024-11-06 10:48:12.317786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3053753 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3053753 00:06:21.038 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:21.038 [2024-11-06 10:48:12.393769] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:21.609 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:21.609 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3053753 00:06:21.609 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.180 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.180 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3053753 00:06:22.180 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.439 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.439 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3053753 00:06:22.439 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:23.010 10:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:23.010 10:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3053753 00:06:23.010 10:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:23.581 10:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:23.581 10:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3053753 00:06:23.581 10:48:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:24.152 10:48:15 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:24.152 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3053753 00:06:24.152 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:24.413 Initializing NVMe Controllers 00:06:24.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:24.413 Controller IO queue size 128, less than required. 00:06:24.413 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:24.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:24.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:24.413 Initialization complete. Launching workers. 00:06:24.413 ======================================================== 00:06:24.413 Latency(us) 00:06:24.413 Device Information : IOPS MiB/s Average min max 00:06:24.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001985.05 1000202.19 1005896.75 00:06:24.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003370.29 1000124.45 1041268.31 00:06:24.413 ======================================================== 00:06:24.413 Total : 256.00 0.12 1002677.67 1000124.45 1041268.31 00:06:24.413 00:06:24.673 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:24.673 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3053753 00:06:24.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3053753) - No such process 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # 
wait 3053753 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:24.674 rmmod nvme_tcp 00:06:24.674 rmmod nvme_fabrics 00:06:24.674 rmmod nvme_keyring 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3052310 ']' 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3052310 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3052310 ']' 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3052310 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:06:24.674 10:48:15 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:24.674 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3052310 00:06:24.674 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:24.674 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:24.674 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3052310' 00:06:24.674 killing process with pid 3052310 00:06:24.674 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3052310 00:06:24.674 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3052310 00:06:24.934 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:24.934 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:24.934 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:24.934 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:24.934 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:24.934 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:24.934 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:24.934 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:24.934 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:06:24.934 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.934 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.934 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.845 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:26.845 00:06:26.845 real 0m18.382s 00:06:26.845 user 0m30.974s 00:06:26.845 sys 0m6.936s 00:06:26.845 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.845 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.845 ************************************ 00:06:26.845 END TEST nvmf_delete_subsystem 00:06:26.845 ************************************ 00:06:26.845 10:48:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:26.845 10:48:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:26.845 10:48:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.845 10:48:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:27.106 ************************************ 00:06:27.106 START TEST nvmf_host_management 00:06:27.106 ************************************ 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:27.106 * Looking for test storage... 
00:06:27.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:27.106 10:48:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.106 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.107 10:48:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:27.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.107 --rc genhtml_branch_coverage=1 00:06:27.107 --rc genhtml_function_coverage=1 00:06:27.107 --rc genhtml_legend=1 00:06:27.107 --rc geninfo_all_blocks=1 00:06:27.107 --rc geninfo_unexecuted_blocks=1 00:06:27.107 00:06:27.107 ' 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:27.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.107 --rc genhtml_branch_coverage=1 00:06:27.107 --rc genhtml_function_coverage=1 00:06:27.107 --rc genhtml_legend=1 00:06:27.107 --rc geninfo_all_blocks=1 00:06:27.107 --rc geninfo_unexecuted_blocks=1 00:06:27.107 00:06:27.107 ' 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:27.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.107 --rc genhtml_branch_coverage=1 00:06:27.107 --rc genhtml_function_coverage=1 00:06:27.107 --rc genhtml_legend=1 00:06:27.107 --rc geninfo_all_blocks=1 00:06:27.107 --rc geninfo_unexecuted_blocks=1 00:06:27.107 00:06:27.107 ' 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:27.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.107 --rc genhtml_branch_coverage=1 00:06:27.107 --rc genhtml_function_coverage=1 00:06:27.107 --rc genhtml_legend=1 00:06:27.107 --rc geninfo_all_blocks=1 00:06:27.107 --rc geninfo_unexecuted_blocks=1 00:06:27.107 00:06:27.107 ' 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:27.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:27.107 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.368 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:27.368 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:27.368 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:27.368 10:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:35.509 10:48:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:35.509 10:48:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:35.509 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:35.509 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:35.509 10:48:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:35.509 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:35.509 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:35.509 10:48:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:35.509 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:35.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:35.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:06:35.510 00:06:35.510 --- 10.0.0.2 ping statistics --- 00:06:35.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.510 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:35.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:35.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:06:35.510 00:06:35.510 --- 10.0.0.1 ping statistics --- 00:06:35.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.510 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3058774 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3058774 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3058774 ']' 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:35.510 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.510 [2024-11-06 10:48:25.997650] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:06:35.510 [2024-11-06 10:48:25.997703] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.510 [2024-11-06 10:48:26.091741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.510 [2024-11-06 10:48:26.132234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:35.510 [2024-11-06 10:48:26.132287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:35.510 [2024-11-06 10:48:26.132296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.510 [2024-11-06 10:48:26.132303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:35.510 [2024-11-06 10:48:26.132312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:35.510 [2024-11-06 10:48:26.133935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.510 [2024-11-06 10:48:26.134095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.510 [2024-11-06 10:48:26.134251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.510 [2024-11-06 10:48:26.134253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:35.510 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:35.510 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:35.510 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:35.510 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:35.510 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.510 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:35.510 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:35.510 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.510 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.510 [2024-11-06 10:48:26.842320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.510 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.510 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:35.510 10:48:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:35.510 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.510 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:35.510 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:35.510 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:35.510 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.510 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.510 Malloc0 00:06:35.510 [2024-11-06 10:48:26.917975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3059124 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3059124 /var/tmp/bdevperf.sock 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3059124 ']' 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:35.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:35.772 { 00:06:35.772 "params": { 00:06:35.772 "name": "Nvme$subsystem", 00:06:35.772 "trtype": "$TEST_TRANSPORT", 00:06:35.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:35.772 "adrfam": "ipv4", 00:06:35.772 "trsvcid": "$NVMF_PORT", 00:06:35.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:35.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:35.772 "hdgst": ${hdgst:-false}, 
00:06:35.772 "ddgst": ${ddgst:-false} 00:06:35.772 }, 00:06:35.772 "method": "bdev_nvme_attach_controller" 00:06:35.772 } 00:06:35.772 EOF 00:06:35.772 )") 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:35.772 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:35.772 "params": { 00:06:35.772 "name": "Nvme0", 00:06:35.772 "trtype": "tcp", 00:06:35.772 "traddr": "10.0.0.2", 00:06:35.772 "adrfam": "ipv4", 00:06:35.772 "trsvcid": "4420", 00:06:35.772 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:35.772 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:35.772 "hdgst": false, 00:06:35.772 "ddgst": false 00:06:35.772 }, 00:06:35.772 "method": "bdev_nvme_attach_controller" 00:06:35.772 }' 00:06:35.772 [2024-11-06 10:48:27.024718] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:06:35.772 [2024-11-06 10:48:27.024779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3059124 ] 00:06:35.772 [2024-11-06 10:48:27.095932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.772 [2024-11-06 10:48:27.132034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.032 Running I/O for 10 seconds... 
00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:36.605 [2024-11-06 10:48:27.905152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a3130 is same with the state(6) to be set 00:06:36.605 [2024-11-06 10:48:27.905753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:06:36.606 [2024-11-06 10:48:27.905792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:36.608 [2024-11-06 10:48:27.906889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca71f0 is same with the state(6) to be set 00:06:36.608 [2024-11-06 10:48:27.908166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:36.608 task offset: 90112 on job bdev=Nvme0n1 fails 00:06:36.608 00:06:36.608 Latency(us) 00:06:36.608 [2024-11-06T09:48:28.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:36.608 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:36.608 Job: Nvme0n1 ended in about 0.46 seconds with error 00:06:36.608 Verification LBA range: start 0x0 length 0x400 00:06:36.608 Nvme0n1 : 0.46 1518.34 94.90 138.03 0.00 37558.42 5488.64 32986.45 00:06:36.608 [2024-11-06T09:48:28.030Z] =================================================================================================================== 00:06:36.608 [2024-11-06T09:48:28.030Z] Total : 1518.34 94.90 138.03 0.00 37558.42 5488.64 32986.45 00:06:36.608 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.608 [2024-11-06 10:48:27.910245] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.608 [2024-11-06 10:48:27.910269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8e000 (9): Bad file descriptor 00:06:36.608 10:48:27
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:36.608 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.608 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:36.608 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.608 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:36.608 [2024-11-06 10:48:27.925019] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:06:37.550 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3059124 00:06:37.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3059124) - No such process 00:06:37.550 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:37.550 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:37.550 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:37.550 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:37.550 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:37.550 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:37.550 
10:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:37.550 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:37.550 { 00:06:37.550 "params": { 00:06:37.550 "name": "Nvme$subsystem", 00:06:37.550 "trtype": "$TEST_TRANSPORT", 00:06:37.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:37.550 "adrfam": "ipv4", 00:06:37.550 "trsvcid": "$NVMF_PORT", 00:06:37.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:37.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:37.550 "hdgst": ${hdgst:-false}, 00:06:37.550 "ddgst": ${ddgst:-false} 00:06:37.550 }, 00:06:37.550 "method": "bdev_nvme_attach_controller" 00:06:37.550 } 00:06:37.550 EOF 00:06:37.550 )") 00:06:37.550 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:37.550 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:37.550 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:37.550 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:37.550 "params": { 00:06:37.550 "name": "Nvme0", 00:06:37.550 "trtype": "tcp", 00:06:37.550 "traddr": "10.0.0.2", 00:06:37.550 "adrfam": "ipv4", 00:06:37.550 "trsvcid": "4420", 00:06:37.550 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:37.550 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:37.550 "hdgst": false, 00:06:37.550 "ddgst": false 00:06:37.550 }, 00:06:37.550 "method": "bdev_nvme_attach_controller" 00:06:37.550 }' 00:06:37.811 [2024-11-06 10:48:28.981787] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:06:37.811 [2024-11-06 10:48:28.981846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3059496 ] 00:06:37.811 [2024-11-06 10:48:29.052632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.811 [2024-11-06 10:48:29.087662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.070 Running I/O for 1 seconds... 00:06:39.011 1600.00 IOPS, 100.00 MiB/s 00:06:39.011 Latency(us) 00:06:39.011 [2024-11-06T09:48:30.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:39.011 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:39.011 Verification LBA range: start 0x0 length 0x400 00:06:39.011 Nvme0n1 : 1.02 1627.19 101.70 0.00 0.00 38627.73 6198.61 31238.83 00:06:39.011 [2024-11-06T09:48:30.433Z] =================================================================================================================== 00:06:39.011 [2024-11-06T09:48:30.433Z] Total : 1627.19 101.70 0.00 0.00 38627.73 6198.61 31238.83 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:39.272 10:48:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:39.272 rmmod nvme_tcp 00:06:39.272 rmmod nvme_fabrics 00:06:39.272 rmmod nvme_keyring 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3058774 ']' 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3058774 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3058774 ']' 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3058774 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3058774 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3058774' 00:06:39.272 killing process with pid 3058774 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3058774 00:06:39.272 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3058774 00:06:39.533 [2024-11-06 10:48:30.729784] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:39.533 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:39.533 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:39.533 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:39.533 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:39.533 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:39.533 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:39.533 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:39.533 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:39.533 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:39.533 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.533 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:39.533 10:48:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.444 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:41.444 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:41.444 00:06:41.444 real 0m14.546s 00:06:41.444 user 0m23.184s 00:06:41.444 sys 0m6.604s 00:06:41.444 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:41.444 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.444 ************************************ 00:06:41.444 END TEST nvmf_host_management 00:06:41.444 ************************************ 00:06:41.705 10:48:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:41.705 10:48:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:41.705 10:48:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:41.705 10:48:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:41.705 ************************************ 00:06:41.705 START TEST nvmf_lvol 00:06:41.705 ************************************ 00:06:41.705 10:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:41.705 * Looking for test storage... 
00:06:41.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:41.705 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:41.705 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:41.705 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:41.705 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:41.705 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.705 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.705 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.705 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.706 10:48:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:41.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.706 --rc genhtml_branch_coverage=1 00:06:41.706 --rc genhtml_function_coverage=1 00:06:41.706 --rc genhtml_legend=1 00:06:41.706 --rc geninfo_all_blocks=1 00:06:41.706 --rc geninfo_unexecuted_blocks=1 
00:06:41.706 00:06:41.706 ' 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:41.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.706 --rc genhtml_branch_coverage=1 00:06:41.706 --rc genhtml_function_coverage=1 00:06:41.706 --rc genhtml_legend=1 00:06:41.706 --rc geninfo_all_blocks=1 00:06:41.706 --rc geninfo_unexecuted_blocks=1 00:06:41.706 00:06:41.706 ' 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:41.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.706 --rc genhtml_branch_coverage=1 00:06:41.706 --rc genhtml_function_coverage=1 00:06:41.706 --rc genhtml_legend=1 00:06:41.706 --rc geninfo_all_blocks=1 00:06:41.706 --rc geninfo_unexecuted_blocks=1 00:06:41.706 00:06:41.706 ' 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:41.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.706 --rc genhtml_branch_coverage=1 00:06:41.706 --rc genhtml_function_coverage=1 00:06:41.706 --rc genhtml_legend=1 00:06:41.706 --rc geninfo_all_blocks=1 00:06:41.706 --rc geninfo_unexecuted_blocks=1 00:06:41.706 00:06:41.706 ' 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.706 10:48:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.706 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.967 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:41.967 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:41.967 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.967 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.967 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:41.967 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.967 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:41.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:41.968 10:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:50.112 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:50.112 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.112 
10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.112 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:50.113 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.113 10:48:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:50.113 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:50.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:50.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:06:50.113 00:06:50.113 --- 10.0.0.2 ping statistics --- 00:06:50.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.113 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:50.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:50.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:06:50.113 00:06:50.113 --- 10.0.0.1 ping statistics --- 00:06:50.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.113 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3063964 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3063964 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3063964 ']' 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:50.113 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:50.113 [2024-11-06 10:48:40.502351] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:06:50.113 [2024-11-06 10:48:40.502423] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.113 [2024-11-06 10:48:40.585030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.113 [2024-11-06 10:48:40.626868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.113 [2024-11-06 10:48:40.626903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.113 [2024-11-06 10:48:40.626911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:50.113 [2024-11-06 10:48:40.626918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:50.113 [2024-11-06 10:48:40.626924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:50.113 [2024-11-06 10:48:40.628494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.113 [2024-11-06 10:48:40.628611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.113 [2024-11-06 10:48:40.628614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.113 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:50.113 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:06:50.113 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:50.113 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:50.113 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:50.113 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.113 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:50.113 [2024-11-06 10:48:41.491770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.113 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:50.373 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:50.373 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:50.633 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:50.633 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:51.206 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:51.206 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a0e5b416-110c-404f-a7d2-689aa75a7f24 00:06:51.206 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a0e5b416-110c-404f-a7d2-689aa75a7f24 lvol 20 00:06:51.206 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f71e6378-701f-4d69-9103-4870650773b8 00:06:51.206 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:51.206 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f71e6378-701f-4d69-9103-4870650773b8 00:06:51.466 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:51.726 [2024-11-06 10:48:42.961774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.726 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:51.985 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3064559 00:06:51.985 10:48:43 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:51.985 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:52.927 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f71e6378-701f-4d69-9103-4870650773b8 MY_SNAPSHOT 00:06:53.187 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7f94435e-5016-4aac-9e73-a071d3e49da9 00:06:53.187 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f71e6378-701f-4d69-9103-4870650773b8 30 00:06:53.447 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7f94435e-5016-4aac-9e73-a071d3e49da9 MY_CLONE 00:06:53.447 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f8fb6ad8-40e3-4db7-8177-a346b6c5ddf5 00:06:53.447 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f8fb6ad8-40e3-4db7-8177-a346b6c5ddf5 00:06:54.020 10:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3064559 00:07:04.016 Initializing NVMe Controllers 00:07:04.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:04.016 Controller IO queue size 128, less than required. 00:07:04.016 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:04.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:04.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:04.016 Initialization complete. Launching workers. 00:07:04.016 ======================================================== 00:07:04.016 Latency(us) 00:07:04.016 Device Information : IOPS MiB/s Average min max 00:07:04.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12183.60 47.59 10507.75 1652.28 47685.35 00:07:04.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17333.40 67.71 7386.67 592.12 60341.93 00:07:04.016 ======================================================== 00:07:04.016 Total : 29517.00 115.30 8674.94 592.12 60341.93 00:07:04.016 00:07:04.017 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:04.017 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f71e6378-701f-4d69-9103-4870650773b8 00:07:04.017 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a0e5b416-110c-404f-a7d2-689aa75a7f24 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:04.017 rmmod nvme_tcp 00:07:04.017 rmmod nvme_fabrics 00:07:04.017 rmmod nvme_keyring 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3063964 ']' 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3063964 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3063964 ']' 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3063964 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3063964 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3063964' 00:07:04.017 killing process with pid 3063964 00:07:04.017 10:48:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3063964 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3063964 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:04.017 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:05.403 00:07:05.403 real 0m23.589s 00:07:05.403 user 1m4.455s 00:07:05.403 sys 0m8.293s 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:05.403 ************************************ 00:07:05.403 END TEST 
nvmf_lvol 00:07:05.403 ************************************ 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:05.403 ************************************ 00:07:05.403 START TEST nvmf_lvs_grow 00:07:05.403 ************************************ 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:05.403 * Looking for test storage... 00:07:05.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.403 10:48:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:05.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.403 --rc genhtml_branch_coverage=1 00:07:05.403 --rc genhtml_function_coverage=1 00:07:05.403 --rc genhtml_legend=1 00:07:05.403 --rc geninfo_all_blocks=1 00:07:05.403 --rc geninfo_unexecuted_blocks=1 00:07:05.403 00:07:05.403 ' 
00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:05.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.403 --rc genhtml_branch_coverage=1 00:07:05.403 --rc genhtml_function_coverage=1 00:07:05.403 --rc genhtml_legend=1 00:07:05.403 --rc geninfo_all_blocks=1 00:07:05.403 --rc geninfo_unexecuted_blocks=1 00:07:05.403 00:07:05.403 ' 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:05.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.403 --rc genhtml_branch_coverage=1 00:07:05.403 --rc genhtml_function_coverage=1 00:07:05.403 --rc genhtml_legend=1 00:07:05.403 --rc geninfo_all_blocks=1 00:07:05.403 --rc geninfo_unexecuted_blocks=1 00:07:05.403 00:07:05.403 ' 00:07:05.403 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:05.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.403 --rc genhtml_branch_coverage=1 00:07:05.403 --rc genhtml_function_coverage=1 00:07:05.403 --rc genhtml_legend=1 00:07:05.403 --rc geninfo_all_blocks=1 00:07:05.403 --rc geninfo_unexecuted_blocks=1 00:07:05.404 00:07:05.404 ' 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.404 10:48:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.404 
10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.404 10:48:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:05.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.404 
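The trace above captures a real bash complaint from nvmf/common.sh line 33: `'[' '' -eq 1 ']'` fails with "integer expression expected" because an unset flag expands to the empty string, which `-eq` cannot compare; the test returns a nonzero status and the branch is simply skipped. A hedged sketch of the failure mode and the usual guard (variable names here are illustrative, not from the script):

```shell
#!/usr/bin/env bash
# Reproduce the "integer expression expected" pattern seen in the trace:
# comparing an empty string with -eq is an error, so [ exits with status 2
# and the if-branch is skipped rather than taken.
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then status=taken; else status=skipped; fi
echo "$status"

# The usual guard: default the variable to 0 before the numeric test.
if [ "${flag:-0}" -eq 1 ]; then guarded=taken; else guarded=skipped; fi
echo "$guarded"
```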
10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:05.404 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:13.545 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:13.545 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.545 
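The loop above resolves each discovered PCI NIC (0000:4b:00.0 and 0000:4b:00.1, both ice-driven 0x159b devices) to its kernel interface name via sysfs. A minimal sketch of that lookup, simulated with a temporary directory so it runs without the hardware; the real code globs `/sys/bus/pci/devices/$pci/net/*`:

```shell
#!/usr/bin/env bash
# Simulated sysfs tree standing in for /sys/bus/pci/devices/$pci/net/.
tmp=$(mktemp -d)
pci="0000:4b:00.0"
mkdir -p "$tmp/$pci/net/cvl_0_0"

pci_net_devs=("$tmp/$pci/net/"*)            # glob the interface directory
pci_net_devs=("${pci_net_devs[@]##*/}")     # strip the path, keep the ifname
echo "Found net devices under $pci: ${pci_net_devs[0]}"
rm -rf "$tmp"
```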
10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:13.545 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:13.545 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.545 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.546 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.546 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.546 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:13.546 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.546 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.546 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.546 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:13.546 10:49:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:13.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:07:13.546 00:07:13.546 --- 10.0.0.2 ping statistics --- 00:07:13.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.546 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:07:13.546 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:13.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:07:13.546 00:07:13.546 --- 10.0.0.1 ping statistics --- 00:07:13.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.546 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:07:13.546 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.546 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:13.546 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:13.546 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.546 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:13.546 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:13.546 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.546 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:13.546 10:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3070925 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3070925 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3070925 ']' 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:13.546 [2024-11-06 10:49:04.051216] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:07:13.546 [2024-11-06 10:49:04.051272] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.546 [2024-11-06 10:49:04.127487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.546 [2024-11-06 10:49:04.162157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.546 [2024-11-06 10:49:04.162190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.546 [2024-11-06 10:49:04.162198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.546 [2024-11-06 10:49:04.162205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.546 [2024-11-06 10:49:04.162211] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
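The network setup traced earlier (`ip netns add cvl_0_0_ns_spdk`, moving the target NIC `cvl_0_0` into the namespace, the 10.0.0.1/10.0.0.2 addressing, and the port-4420 iptables accept rule) can be sketched as a dry run. Here `run` records each command instead of executing it, since the real calls need root and the physical `cvl_0_*` interfaces:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace wiring from the trace: commands are
# recorded into an array, not executed.
cmds=()
run() { cmds+=("$*"); }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target NIC into ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

printf '%s\n' "${cmds[@]}"
```

The cross-namespace pings that follow in the trace (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) verify exactly this wiring before the target starts.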
00:07:13.546 [2024-11-06 10:49:04.162788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:13.546 [2024-11-06 10:49:04.445651] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:13.546 ************************************ 00:07:13.546 START TEST lvs_grow_clean 00:07:13.546 ************************************ 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=86bef126-dc08-41f6-82a2-d0c81c253007 00:07:13.546 10:49:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86bef126-dc08-41f6-82a2-d0c81c253007 00:07:13.546 10:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:13.806 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:13.806 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:13.806 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 86bef126-dc08-41f6-82a2-d0c81c253007 lvol 150 00:07:14.066 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=702520d6-b667-4f91-a206-473596e1f3b9 00:07:14.066 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:14.066 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:14.066 [2024-11-06 10:49:05.368906] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:14.066 [2024-11-06 10:49:05.368956] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:14.066 true 00:07:14.066 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86bef126-dc08-41f6-82a2-d0c81c253007 00:07:14.066 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:14.326 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:14.326 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:14.326 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 702520d6-b667-4f91-a206-473596e1f3b9 00:07:14.588 10:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:14.849 [2024-11-06 10:49:06.026920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.849 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:14.849 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:14.849 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3071486 00:07:14.849 10:49:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:14.849 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3071486 /var/tmp/bdevperf.sock 00:07:14.849 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3071486 ']' 00:07:14.849 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:14.849 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:14.849 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:14.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:14.849 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:14.849 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:14.849 [2024-11-06 10:49:06.233905] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
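The `total_data_clusters` value of 49 checked earlier in the trace is consistent with the sizes used: a 200 MiB AIO bdev carved into 4 MiB clusters (`--cluster-sz 4194304`) gives 50 clusters, with one consumed by lvstore metadata in this run. Note the exact metadata overhead depends on lvstore parameters, so `md_clusters=1` below is an observation from this log, not a general rule:

```shell
#!/usr/bin/env bash
# Back-of-the-envelope check of data_clusters=49 from the trace.
aio_size_mb=200          # truncate -s 200M .../aio_bdev
cluster_sz_mb=4          # --cluster-sz 4194304
total=$(( aio_size_mb / cluster_sz_mb ))
md_clusters=1            # metadata overhead observed in this run
echo $(( total - md_clusters ))
```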
00:07:14.849 [2024-11-06 10:49:06.233946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071486 ] 00:07:15.109 [2024-11-06 10:49:06.313937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.109 [2024-11-06 10:49:06.349632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.680 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:15.680 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:15.680 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:16.251 Nvme0n1 00:07:16.251 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:16.251 [ 00:07:16.251 { 00:07:16.251 "name": "Nvme0n1", 00:07:16.251 "aliases": [ 00:07:16.251 "702520d6-b667-4f91-a206-473596e1f3b9" 00:07:16.251 ], 00:07:16.251 "product_name": "NVMe disk", 00:07:16.251 "block_size": 4096, 00:07:16.251 "num_blocks": 38912, 00:07:16.251 "uuid": "702520d6-b667-4f91-a206-473596e1f3b9", 00:07:16.251 "numa_id": 0, 00:07:16.251 "assigned_rate_limits": { 00:07:16.251 "rw_ios_per_sec": 0, 00:07:16.251 "rw_mbytes_per_sec": 0, 00:07:16.251 "r_mbytes_per_sec": 0, 00:07:16.251 "w_mbytes_per_sec": 0 00:07:16.251 }, 00:07:16.251 "claimed": false, 00:07:16.251 "zoned": false, 00:07:16.251 "supported_io_types": { 00:07:16.251 "read": true, 
00:07:16.251 "write": true, 00:07:16.251 "unmap": true, 00:07:16.251 "flush": true, 00:07:16.251 "reset": true, 00:07:16.251 "nvme_admin": true, 00:07:16.251 "nvme_io": true, 00:07:16.251 "nvme_io_md": false, 00:07:16.251 "write_zeroes": true, 00:07:16.251 "zcopy": false, 00:07:16.251 "get_zone_info": false, 00:07:16.251 "zone_management": false, 00:07:16.251 "zone_append": false, 00:07:16.251 "compare": true, 00:07:16.251 "compare_and_write": true, 00:07:16.251 "abort": true, 00:07:16.251 "seek_hole": false, 00:07:16.251 "seek_data": false, 00:07:16.251 "copy": true, 00:07:16.251 "nvme_iov_md": false 00:07:16.251 }, 00:07:16.251 "memory_domains": [ 00:07:16.251 { 00:07:16.251 "dma_device_id": "system", 00:07:16.251 "dma_device_type": 1 00:07:16.251 } 00:07:16.251 ], 00:07:16.251 "driver_specific": { 00:07:16.251 "nvme": [ 00:07:16.251 { 00:07:16.251 "trid": { 00:07:16.251 "trtype": "TCP", 00:07:16.251 "adrfam": "IPv4", 00:07:16.251 "traddr": "10.0.0.2", 00:07:16.251 "trsvcid": "4420", 00:07:16.251 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:16.251 }, 00:07:16.251 "ctrlr_data": { 00:07:16.251 "cntlid": 1, 00:07:16.251 "vendor_id": "0x8086", 00:07:16.251 "model_number": "SPDK bdev Controller", 00:07:16.251 "serial_number": "SPDK0", 00:07:16.251 "firmware_revision": "25.01", 00:07:16.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:16.251 "oacs": { 00:07:16.251 "security": 0, 00:07:16.251 "format": 0, 00:07:16.251 "firmware": 0, 00:07:16.251 "ns_manage": 0 00:07:16.251 }, 00:07:16.251 "multi_ctrlr": true, 00:07:16.251 "ana_reporting": false 00:07:16.251 }, 00:07:16.251 "vs": { 00:07:16.251 "nvme_version": "1.3" 00:07:16.251 }, 00:07:16.251 "ns_data": { 00:07:16.251 "id": 1, 00:07:16.251 "can_share": true 00:07:16.251 } 00:07:16.251 } 00:07:16.251 ], 00:07:16.251 "mp_policy": "active_passive" 00:07:16.251 } 00:07:16.251 } 00:07:16.251 ] 00:07:16.251 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3071667 00:07:16.251 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:16.251 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:16.251 Running I/O for 10 seconds... 00:07:17.632 Latency(us) 00:07:17.632 [2024-11-06T09:49:09.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.632 Nvme0n1 : 1.00 17740.00 69.30 0.00 0.00 0.00 0.00 0.00 00:07:17.632 [2024-11-06T09:49:09.054Z] =================================================================================================================== 00:07:17.632 [2024-11-06T09:49:09.054Z] Total : 17740.00 69.30 0.00 0.00 0.00 0.00 0.00 00:07:17.632 00:07:18.259 10:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 86bef126-dc08-41f6-82a2-d0c81c253007 00:07:18.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.567 Nvme0n1 : 2.00 17872.50 69.81 0.00 0.00 0.00 0.00 0.00 00:07:18.567 [2024-11-06T09:49:09.989Z] =================================================================================================================== 00:07:18.567 [2024-11-06T09:49:09.989Z] Total : 17872.50 69.81 0.00 0.00 0.00 0.00 0.00 00:07:18.567 00:07:18.567 true 00:07:18.567 10:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86bef126-dc08-41f6-82a2-d0c81c253007 00:07:18.567 10:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:18.567 10:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:18.567 10:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:18.567 10:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3071667 00:07:19.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.539 Nvme0n1 : 3.00 17942.67 70.09 0.00 0.00 0.00 0.00 0.00 00:07:19.539 [2024-11-06T09:49:10.961Z] =================================================================================================================== 00:07:19.539 [2024-11-06T09:49:10.961Z] Total : 17942.67 70.09 0.00 0.00 0.00 0.00 0.00 00:07:19.539 00:07:20.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.481 Nvme0n1 : 4.00 17976.75 70.22 0.00 0.00 0.00 0.00 0.00 00:07:20.482 [2024-11-06T09:49:11.904Z] =================================================================================================================== 00:07:20.482 [2024-11-06T09:49:11.904Z] Total : 17976.75 70.22 0.00 0.00 0.00 0.00 0.00 00:07:20.482 00:07:21.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.424 Nvme0n1 : 5.00 18027.40 70.42 0.00 0.00 0.00 0.00 0.00 00:07:21.424 [2024-11-06T09:49:12.846Z] =================================================================================================================== 00:07:21.424 [2024-11-06T09:49:12.846Z] Total : 18027.40 70.42 0.00 0.00 0.00 0.00 0.00 00:07:21.424 00:07:22.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.366 Nvme0n1 : 6.00 18048.17 70.50 0.00 0.00 0.00 0.00 0.00 00:07:22.366 [2024-11-06T09:49:13.788Z] =================================================================================================================== 00:07:22.366 
[2024-11-06T09:49:13.788Z] Total : 18048.17 70.50 0.00 0.00 0.00 0.00 0.00 00:07:22.366 00:07:23.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.308 Nvme0n1 : 7.00 18071.43 70.59 0.00 0.00 0.00 0.00 0.00 00:07:23.308 [2024-11-06T09:49:14.730Z] =================================================================================================================== 00:07:23.308 [2024-11-06T09:49:14.730Z] Total : 18071.43 70.59 0.00 0.00 0.00 0.00 0.00 00:07:23.308 00:07:24.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.695 Nvme0n1 : 8.00 18088.75 70.66 0.00 0.00 0.00 0.00 0.00 00:07:24.695 [2024-11-06T09:49:16.117Z] =================================================================================================================== 00:07:24.695 [2024-11-06T09:49:16.117Z] Total : 18088.75 70.66 0.00 0.00 0.00 0.00 0.00 00:07:24.695 00:07:25.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.637 Nvme0n1 : 9.00 18098.89 70.70 0.00 0.00 0.00 0.00 0.00 00:07:25.637 [2024-11-06T09:49:17.059Z] =================================================================================================================== 00:07:25.637 [2024-11-06T09:49:17.059Z] Total : 18098.89 70.70 0.00 0.00 0.00 0.00 0.00 00:07:25.637 00:07:26.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.578 Nvme0n1 : 10.00 18110.40 70.74 0.00 0.00 0.00 0.00 0.00 00:07:26.578 [2024-11-06T09:49:18.000Z] =================================================================================================================== 00:07:26.578 [2024-11-06T09:49:18.000Z] Total : 18110.40 70.74 0.00 0.00 0.00 0.00 0.00 00:07:26.578 00:07:26.578 00:07:26.578 Latency(us) 00:07:26.578 [2024-11-06T09:49:18.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:26.578 Nvme0n1 : 10.00 18115.58 70.76 0.00 0.00 7063.45 4369.07 16165.55 00:07:26.578 [2024-11-06T09:49:18.000Z] =================================================================================================================== 00:07:26.578 [2024-11-06T09:49:18.000Z] Total : 18115.58 70.76 0.00 0.00 7063.45 4369.07 16165.55 00:07:26.578 { 00:07:26.578 "results": [ 00:07:26.578 { 00:07:26.578 "job": "Nvme0n1", 00:07:26.578 "core_mask": "0x2", 00:07:26.578 "workload": "randwrite", 00:07:26.578 "status": "finished", 00:07:26.578 "queue_depth": 128, 00:07:26.578 "io_size": 4096, 00:07:26.578 "runtime": 10.004206, 00:07:26.578 "iops": 18115.58058680519, 00:07:26.578 "mibps": 70.76398666720777, 00:07:26.578 "io_failed": 0, 00:07:26.578 "io_timeout": 0, 00:07:26.578 "avg_latency_us": 7063.447568935581, 00:07:26.578 "min_latency_us": 4369.066666666667, 00:07:26.578 "max_latency_us": 16165.546666666667 00:07:26.578 } 00:07:26.578 ], 00:07:26.578 "core_count": 1 00:07:26.578 } 00:07:26.578 10:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3071486 00:07:26.578 10:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3071486 ']' 00:07:26.578 10:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3071486 00:07:26.578 10:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:26.578 10:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:26.578 10:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3071486 00:07:26.578 10:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:26.578 10:49:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:26.578 10:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3071486' 00:07:26.578 killing process with pid 3071486 00:07:26.578 10:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3071486 00:07:26.578 Received shutdown signal, test time was about 10.000000 seconds 00:07:26.578 00:07:26.578 Latency(us) 00:07:26.578 [2024-11-06T09:49:18.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.578 [2024-11-06T09:49:18.000Z] =================================================================================================================== 00:07:26.578 [2024-11-06T09:49:18.000Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:26.578 10:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3071486 00:07:26.578 10:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:26.840 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:27.101 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86bef126-dc08-41f6-82a2-d0c81c253007 00:07:27.101 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:27.101 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:27.101 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:27.101 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:27.361 [2024-11-06 10:49:18.588998] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:27.361 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86bef126-dc08-41f6-82a2-d0c81c253007 00:07:27.361 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:27.361 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86bef126-dc08-41f6-82a2-d0c81c253007 00:07:27.361 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:27.361 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.361 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:27.361 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.361 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:27.361 
10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.361 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:27.361 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:27.361 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86bef126-dc08-41f6-82a2-d0c81c253007 00:07:27.623 request: 00:07:27.623 { 00:07:27.623 "uuid": "86bef126-dc08-41f6-82a2-d0c81c253007", 00:07:27.623 "method": "bdev_lvol_get_lvstores", 00:07:27.623 "req_id": 1 00:07:27.623 } 00:07:27.623 Got JSON-RPC error response 00:07:27.623 response: 00:07:27.623 { 00:07:27.623 "code": -19, 00:07:27.623 "message": "No such device" 00:07:27.623 } 00:07:27.623 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:27.623 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:27.623 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:27.623 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:27.623 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:27.623 aio_bdev 00:07:27.623 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 702520d6-b667-4f91-a206-473596e1f3b9 00:07:27.623 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=702520d6-b667-4f91-a206-473596e1f3b9 00:07:27.623 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:27.623 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:27.623 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:27.623 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:27.623 10:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:27.883 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 702520d6-b667-4f91-a206-473596e1f3b9 -t 2000 00:07:27.883 [ 00:07:27.883 { 00:07:27.883 "name": "702520d6-b667-4f91-a206-473596e1f3b9", 00:07:27.883 "aliases": [ 00:07:27.883 "lvs/lvol" 00:07:27.883 ], 00:07:27.883 "product_name": "Logical Volume", 00:07:27.883 "block_size": 4096, 00:07:27.883 "num_blocks": 38912, 00:07:27.883 "uuid": "702520d6-b667-4f91-a206-473596e1f3b9", 00:07:27.883 "assigned_rate_limits": { 00:07:27.883 "rw_ios_per_sec": 0, 00:07:27.883 "rw_mbytes_per_sec": 0, 00:07:27.883 "r_mbytes_per_sec": 0, 00:07:27.883 "w_mbytes_per_sec": 0 00:07:27.883 }, 00:07:27.883 "claimed": false, 00:07:27.883 "zoned": false, 00:07:27.883 "supported_io_types": { 00:07:27.883 "read": true, 00:07:27.883 "write": true, 00:07:27.883 "unmap": true, 00:07:27.883 "flush": false, 00:07:27.883 "reset": true, 00:07:27.883 
"nvme_admin": false, 00:07:27.883 "nvme_io": false, 00:07:27.883 "nvme_io_md": false, 00:07:27.883 "write_zeroes": true, 00:07:27.883 "zcopy": false, 00:07:27.883 "get_zone_info": false, 00:07:27.883 "zone_management": false, 00:07:27.883 "zone_append": false, 00:07:27.883 "compare": false, 00:07:27.883 "compare_and_write": false, 00:07:27.883 "abort": false, 00:07:27.883 "seek_hole": true, 00:07:27.883 "seek_data": true, 00:07:27.883 "copy": false, 00:07:27.883 "nvme_iov_md": false 00:07:27.883 }, 00:07:27.883 "driver_specific": { 00:07:27.883 "lvol": { 00:07:27.883 "lvol_store_uuid": "86bef126-dc08-41f6-82a2-d0c81c253007", 00:07:27.883 "base_bdev": "aio_bdev", 00:07:27.883 "thin_provision": false, 00:07:27.883 "num_allocated_clusters": 38, 00:07:27.883 "snapshot": false, 00:07:27.883 "clone": false, 00:07:27.883 "esnap_clone": false 00:07:27.883 } 00:07:27.883 } 00:07:27.883 } 00:07:27.883 ] 00:07:27.884 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:27.884 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86bef126-dc08-41f6-82a2-d0c81c253007 00:07:27.884 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:28.144 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:28.144 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86bef126-dc08-41f6-82a2-d0c81c253007 00:07:28.145 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:28.405 10:49:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:28.405 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 702520d6-b667-4f91-a206-473596e1f3b9 00:07:28.405 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 86bef126-dc08-41f6-82a2-d0c81c253007 00:07:28.664 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:28.925 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:28.925 00:07:28.925 real 0m15.665s 00:07:28.925 user 0m15.429s 00:07:28.925 sys 0m1.293s 00:07:28.925 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:28.925 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:28.925 ************************************ 00:07:28.925 END TEST lvs_grow_clean 00:07:28.925 ************************************ 00:07:28.925 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:28.925 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:28.925 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:28.925 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:28.925 ************************************ 
00:07:28.925 START TEST lvs_grow_dirty 00:07:28.925 ************************************ 00:07:28.925 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:28.925 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:28.925 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:28.925 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:28.925 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:28.925 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:28.925 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:28.925 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:28.925 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:28.925 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:29.185 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:29.185 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:29.445 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f653377e-5b91-493a-a564-e668b43d7abf 00:07:29.445 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f653377e-5b91-493a-a564-e668b43d7abf 00:07:29.445 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:29.445 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:29.445 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:29.445 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f653377e-5b91-493a-a564-e668b43d7abf lvol 150 00:07:29.705 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=812eacb0-9d64-4580-b3aa-2aedb0c5eaac 00:07:29.705 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:29.705 10:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:29.705 [2024-11-06 10:49:21.096358] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:29.705 [2024-11-06 10:49:21.096411] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:29.705 true 00:07:29.705 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f653377e-5b91-493a-a564-e668b43d7abf 00:07:29.705 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:29.965 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:29.965 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:30.225 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 812eacb0-9d64-4580-b3aa-2aedb0c5eaac 00:07:30.225 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:30.485 [2024-11-06 10:49:21.762367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.486 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:30.746 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3074727 00:07:30.746 10:49:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:30.746 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:30.746 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3074727 /var/tmp/bdevperf.sock 00:07:30.746 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3074727 ']' 00:07:30.746 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:30.746 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:30.746 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:30.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:30.746 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:30.746 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:30.746 [2024-11-06 10:49:22.012795] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:07:30.746 [2024-11-06 10:49:22.012849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074727 ] 00:07:30.746 [2024-11-06 10:49:22.095124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.746 [2024-11-06 10:49:22.124907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.687 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:31.687 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:31.688 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:31.948 Nvme0n1 00:07:31.948 10:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:31.948 [ 00:07:31.948 { 00:07:31.948 "name": "Nvme0n1", 00:07:31.948 "aliases": [ 00:07:31.948 "812eacb0-9d64-4580-b3aa-2aedb0c5eaac" 00:07:31.948 ], 00:07:31.948 "product_name": "NVMe disk", 00:07:31.948 "block_size": 4096, 00:07:31.948 "num_blocks": 38912, 00:07:31.948 "uuid": "812eacb0-9d64-4580-b3aa-2aedb0c5eaac", 00:07:31.948 "numa_id": 0, 00:07:31.948 "assigned_rate_limits": { 00:07:31.948 "rw_ios_per_sec": 0, 00:07:31.948 "rw_mbytes_per_sec": 0, 00:07:31.948 "r_mbytes_per_sec": 0, 00:07:31.948 "w_mbytes_per_sec": 0 00:07:31.948 }, 00:07:31.948 "claimed": false, 00:07:31.948 "zoned": false, 00:07:31.948 "supported_io_types": { 00:07:31.948 "read": true, 
00:07:31.948 "write": true, 00:07:31.948 "unmap": true, 00:07:31.948 "flush": true, 00:07:31.948 "reset": true, 00:07:31.948 "nvme_admin": true, 00:07:31.948 "nvme_io": true, 00:07:31.948 "nvme_io_md": false, 00:07:31.948 "write_zeroes": true, 00:07:31.948 "zcopy": false, 00:07:31.948 "get_zone_info": false, 00:07:31.948 "zone_management": false, 00:07:31.948 "zone_append": false, 00:07:31.948 "compare": true, 00:07:31.948 "compare_and_write": true, 00:07:31.948 "abort": true, 00:07:31.948 "seek_hole": false, 00:07:31.948 "seek_data": false, 00:07:31.948 "copy": true, 00:07:31.948 "nvme_iov_md": false 00:07:31.948 }, 00:07:31.948 "memory_domains": [ 00:07:31.948 { 00:07:31.948 "dma_device_id": "system", 00:07:31.948 "dma_device_type": 1 00:07:31.948 } 00:07:31.948 ], 00:07:31.948 "driver_specific": { 00:07:31.948 "nvme": [ 00:07:31.948 { 00:07:31.948 "trid": { 00:07:31.948 "trtype": "TCP", 00:07:31.948 "adrfam": "IPv4", 00:07:31.948 "traddr": "10.0.0.2", 00:07:31.948 "trsvcid": "4420", 00:07:31.948 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:31.948 }, 00:07:31.948 "ctrlr_data": { 00:07:31.948 "cntlid": 1, 00:07:31.948 "vendor_id": "0x8086", 00:07:31.948 "model_number": "SPDK bdev Controller", 00:07:31.948 "serial_number": "SPDK0", 00:07:31.948 "firmware_revision": "25.01", 00:07:31.948 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:31.948 "oacs": { 00:07:31.948 "security": 0, 00:07:31.948 "format": 0, 00:07:31.948 "firmware": 0, 00:07:31.948 "ns_manage": 0 00:07:31.948 }, 00:07:31.948 "multi_ctrlr": true, 00:07:31.948 "ana_reporting": false 00:07:31.948 }, 00:07:31.948 "vs": { 00:07:31.948 "nvme_version": "1.3" 00:07:31.948 }, 00:07:31.948 "ns_data": { 00:07:31.948 "id": 1, 00:07:31.948 "can_share": true 00:07:31.948 } 00:07:31.948 } 00:07:31.948 ], 00:07:31.948 "mp_policy": "active_passive" 00:07:31.948 } 00:07:31.948 } 00:07:31.948 ] 00:07:31.948 10:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3075019 00:07:31.948 10:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:31.948 10:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:32.209 Running I/O for 10 seconds... 00:07:33.150 Latency(us) 00:07:33.150 [2024-11-06T09:49:24.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.151 Nvme0n1 : 1.00 17162.00 67.04 0.00 0.00 0.00 0.00 0.00 00:07:33.151 [2024-11-06T09:49:24.573Z] =================================================================================================================== 00:07:33.151 [2024-11-06T09:49:24.573Z] Total : 17162.00 67.04 0.00 0.00 0.00 0.00 0.00 00:07:33.151 00:07:34.093 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f653377e-5b91-493a-a564-e668b43d7abf 00:07:34.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.093 Nvme0n1 : 2.00 17281.00 67.50 0.00 0.00 0.00 0.00 0.00 00:07:34.093 [2024-11-06T09:49:25.515Z] =================================================================================================================== 00:07:34.093 [2024-11-06T09:49:25.515Z] Total : 17281.00 67.50 0.00 0.00 0.00 0.00 0.00 00:07:34.093 00:07:34.093 true 00:07:34.354 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f653377e-5b91-493a-a564-e668b43d7abf 00:07:34.354 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:34.354 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:34.354 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:34.354 10:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3075019 00:07:35.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.296 Nvme0n1 : 3.00 17331.33 67.70 0.00 0.00 0.00 0.00 0.00 00:07:35.296 [2024-11-06T09:49:26.718Z] =================================================================================================================== 00:07:35.296 [2024-11-06T09:49:26.718Z] Total : 17331.33 67.70 0.00 0.00 0.00 0.00 0.00 00:07:35.296 00:07:36.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.241 Nvme0n1 : 4.00 17370.50 67.85 0.00 0.00 0.00 0.00 0.00 00:07:36.241 [2024-11-06T09:49:27.663Z] =================================================================================================================== 00:07:36.241 [2024-11-06T09:49:27.663Z] Total : 17370.50 67.85 0.00 0.00 0.00 0.00 0.00 00:07:36.241 00:07:37.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.181 Nvme0n1 : 5.00 17397.20 67.96 0.00 0.00 0.00 0.00 0.00 00:07:37.181 [2024-11-06T09:49:28.603Z] =================================================================================================================== 00:07:37.181 [2024-11-06T09:49:28.603Z] Total : 17397.20 67.96 0.00 0.00 0.00 0.00 0.00 00:07:37.181 00:07:38.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.121 Nvme0n1 : 6.00 17421.67 68.05 0.00 0.00 0.00 0.00 0.00 00:07:38.121 [2024-11-06T09:49:29.543Z] =================================================================================================================== 00:07:38.121 
[2024-11-06T09:49:29.543Z] Total : 17421.67 68.05 0.00 0.00 0.00 0.00 0.00 00:07:38.121 00:07:39.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.061 Nvme0n1 : 7.00 17436.86 68.11 0.00 0.00 0.00 0.00 0.00 00:07:39.061 [2024-11-06T09:49:30.483Z] =================================================================================================================== 00:07:39.061 [2024-11-06T09:49:30.483Z] Total : 17436.86 68.11 0.00 0.00 0.00 0.00 0.00 00:07:39.061 00:07:40.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.444 Nvme0n1 : 8.00 17451.25 68.17 0.00 0.00 0.00 0.00 0.00 00:07:40.444 [2024-11-06T09:49:31.866Z] =================================================================================================================== 00:07:40.444 [2024-11-06T09:49:31.866Z] Total : 17451.25 68.17 0.00 0.00 0.00 0.00 0.00 00:07:40.444 00:07:41.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.383 Nvme0n1 : 9.00 17463.33 68.22 0.00 0.00 0.00 0.00 0.00 00:07:41.383 [2024-11-06T09:49:32.805Z] =================================================================================================================== 00:07:41.383 [2024-11-06T09:49:32.805Z] Total : 17463.33 68.22 0.00 0.00 0.00 0.00 0.00 00:07:41.383 00:07:42.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.324 Nvme0n1 : 10.00 17473.80 68.26 0.00 0.00 0.00 0.00 0.00 00:07:42.324 [2024-11-06T09:49:33.746Z] =================================================================================================================== 00:07:42.324 [2024-11-06T09:49:33.746Z] Total : 17473.80 68.26 0.00 0.00 0.00 0.00 0.00 00:07:42.324 00:07:42.324 00:07:42.324 Latency(us) 00:07:42.324 [2024-11-06T09:49:33.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:42.324 Nvme0n1 : 10.01 17473.84 68.26 0.00 0.00 7320.63 6089.39 15182.51 00:07:42.324 [2024-11-06T09:49:33.746Z] =================================================================================================================== 00:07:42.324 [2024-11-06T09:49:33.746Z] Total : 17473.84 68.26 0.00 0.00 7320.63 6089.39 15182.51 00:07:42.324 { 00:07:42.324 "results": [ 00:07:42.324 { 00:07:42.324 "job": "Nvme0n1", 00:07:42.324 "core_mask": "0x2", 00:07:42.324 "workload": "randwrite", 00:07:42.324 "status": "finished", 00:07:42.324 "queue_depth": 128, 00:07:42.324 "io_size": 4096, 00:07:42.324 "runtime": 10.006844, 00:07:42.324 "iops": 17473.840903285793, 00:07:42.324 "mibps": 68.25719102846013, 00:07:42.324 "io_failed": 0, 00:07:42.324 "io_timeout": 0, 00:07:42.324 "avg_latency_us": 7320.628928158849, 00:07:42.324 "min_latency_us": 6089.386666666666, 00:07:42.324 "max_latency_us": 15182.506666666666 00:07:42.324 } 00:07:42.324 ], 00:07:42.324 "core_count": 1 00:07:42.324 } 00:07:42.324 10:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3074727 00:07:42.324 10:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3074727 ']' 00:07:42.324 10:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3074727 00:07:42.324 10:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:07:42.324 10:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:42.324 10:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3074727 00:07:42.324 10:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:42.324 10:49:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:42.324 10:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3074727' 00:07:42.324 killing process with pid 3074727 00:07:42.324 10:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3074727 00:07:42.324 Received shutdown signal, test time was about 10.000000 seconds 00:07:42.324 00:07:42.324 Latency(us) 00:07:42.324 [2024-11-06T09:49:33.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.324 [2024-11-06T09:49:33.746Z] =================================================================================================================== 00:07:42.324 [2024-11-06T09:49:33.746Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:42.324 10:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3074727 00:07:42.324 10:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:42.585 10:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:42.585 10:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f653377e-5b91-493a-a564-e668b43d7abf 00:07:42.585 10:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3070925 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3070925 00:07:42.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3070925 Killed "${NVMF_APP[@]}" "$@" 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3077095 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3077095 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3077095 ']' 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:42.845 10:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:42.845 [2024-11-06 10:49:34.227141] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:07:42.845 [2024-11-06 10:49:34.227199] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.106 [2024-11-06 10:49:34.303814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.106 [2024-11-06 10:49:34.338994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.106 [2024-11-06 10:49:34.339027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.106 [2024-11-06 10:49:34.339035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.106 [2024-11-06 10:49:34.339042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.106 [2024-11-06 10:49:34.339047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:43.106 [2024-11-06 10:49:34.339606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.679 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:43.679 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:43.679 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:43.679 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:43.679 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:43.679 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.679 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:43.940 [2024-11-06 10:49:35.210114] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:43.940 [2024-11-06 10:49:35.210214] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:43.940 [2024-11-06 10:49:35.210244] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:43.940 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:43.940 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 812eacb0-9d64-4580-b3aa-2aedb0c5eaac 00:07:43.940 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=812eacb0-9d64-4580-b3aa-2aedb0c5eaac 
00:07:43.940 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:43.940 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:43.940 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:43.940 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:43.940 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:44.201 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 812eacb0-9d64-4580-b3aa-2aedb0c5eaac -t 2000 00:07:44.201 [ 00:07:44.201 { 00:07:44.201 "name": "812eacb0-9d64-4580-b3aa-2aedb0c5eaac", 00:07:44.201 "aliases": [ 00:07:44.201 "lvs/lvol" 00:07:44.201 ], 00:07:44.201 "product_name": "Logical Volume", 00:07:44.201 "block_size": 4096, 00:07:44.201 "num_blocks": 38912, 00:07:44.201 "uuid": "812eacb0-9d64-4580-b3aa-2aedb0c5eaac", 00:07:44.201 "assigned_rate_limits": { 00:07:44.201 "rw_ios_per_sec": 0, 00:07:44.201 "rw_mbytes_per_sec": 0, 00:07:44.201 "r_mbytes_per_sec": 0, 00:07:44.201 "w_mbytes_per_sec": 0 00:07:44.201 }, 00:07:44.201 "claimed": false, 00:07:44.201 "zoned": false, 00:07:44.201 "supported_io_types": { 00:07:44.201 "read": true, 00:07:44.201 "write": true, 00:07:44.201 "unmap": true, 00:07:44.201 "flush": false, 00:07:44.201 "reset": true, 00:07:44.201 "nvme_admin": false, 00:07:44.201 "nvme_io": false, 00:07:44.201 "nvme_io_md": false, 00:07:44.201 "write_zeroes": true, 00:07:44.201 "zcopy": false, 00:07:44.201 "get_zone_info": false, 00:07:44.201 "zone_management": false, 00:07:44.201 "zone_append": 
false, 00:07:44.201 "compare": false, 00:07:44.201 "compare_and_write": false, 00:07:44.201 "abort": false, 00:07:44.201 "seek_hole": true, 00:07:44.201 "seek_data": true, 00:07:44.201 "copy": false, 00:07:44.201 "nvme_iov_md": false 00:07:44.201 }, 00:07:44.201 "driver_specific": { 00:07:44.201 "lvol": { 00:07:44.201 "lvol_store_uuid": "f653377e-5b91-493a-a564-e668b43d7abf", 00:07:44.201 "base_bdev": "aio_bdev", 00:07:44.201 "thin_provision": false, 00:07:44.201 "num_allocated_clusters": 38, 00:07:44.201 "snapshot": false, 00:07:44.201 "clone": false, 00:07:44.201 "esnap_clone": false 00:07:44.201 } 00:07:44.201 } 00:07:44.201 } 00:07:44.201 ] 00:07:44.201 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:44.201 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f653377e-5b91-493a-a564-e668b43d7abf 00:07:44.201 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:44.461 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:44.461 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:44.461 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f653377e-5b91-493a-a564-e668b43d7abf 00:07:44.721 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:44.721 10:49:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:44.721 [2024-11-06 10:49:36.046274] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:44.721 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f653377e-5b91-493a-a564-e668b43d7abf 00:07:44.721 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:44.721 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f653377e-5b91-493a-a564-e668b43d7abf 00:07:44.721 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.721 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.721 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.721 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.721 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.721 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.721 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.721 10:49:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:44.721 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f653377e-5b91-493a-a564-e668b43d7abf 00:07:44.981 request: 00:07:44.981 { 00:07:44.981 "uuid": "f653377e-5b91-493a-a564-e668b43d7abf", 00:07:44.981 "method": "bdev_lvol_get_lvstores", 00:07:44.981 "req_id": 1 00:07:44.981 } 00:07:44.981 Got JSON-RPC error response 00:07:44.981 response: 00:07:44.981 { 00:07:44.981 "code": -19, 00:07:44.981 "message": "No such device" 00:07:44.981 } 00:07:44.981 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:44.981 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:44.981 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:44.981 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:44.982 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:45.242 aio_bdev 00:07:45.242 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 812eacb0-9d64-4580-b3aa-2aedb0c5eaac 00:07:45.242 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=812eacb0-9d64-4580-b3aa-2aedb0c5eaac 00:07:45.242 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:45.242 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:45.242 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:45.242 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:45.242 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:45.242 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 812eacb0-9d64-4580-b3aa-2aedb0c5eaac -t 2000 00:07:45.503 [ 00:07:45.503 { 00:07:45.503 "name": "812eacb0-9d64-4580-b3aa-2aedb0c5eaac", 00:07:45.503 "aliases": [ 00:07:45.503 "lvs/lvol" 00:07:45.503 ], 00:07:45.503 "product_name": "Logical Volume", 00:07:45.503 "block_size": 4096, 00:07:45.503 "num_blocks": 38912, 00:07:45.503 "uuid": "812eacb0-9d64-4580-b3aa-2aedb0c5eaac", 00:07:45.503 "assigned_rate_limits": { 00:07:45.503 "rw_ios_per_sec": 0, 00:07:45.503 "rw_mbytes_per_sec": 0, 00:07:45.503 "r_mbytes_per_sec": 0, 00:07:45.503 "w_mbytes_per_sec": 0 00:07:45.503 }, 00:07:45.503 "claimed": false, 00:07:45.503 "zoned": false, 00:07:45.503 "supported_io_types": { 00:07:45.503 "read": true, 00:07:45.503 "write": true, 00:07:45.503 "unmap": true, 00:07:45.503 "flush": false, 00:07:45.503 "reset": true, 00:07:45.503 "nvme_admin": false, 00:07:45.503 "nvme_io": false, 00:07:45.503 "nvme_io_md": false, 00:07:45.503 "write_zeroes": true, 00:07:45.503 "zcopy": false, 00:07:45.503 "get_zone_info": false, 00:07:45.503 "zone_management": false, 00:07:45.503 "zone_append": false, 00:07:45.503 "compare": false, 00:07:45.503 "compare_and_write": false, 
00:07:45.503 "abort": false, 00:07:45.503 "seek_hole": true, 00:07:45.503 "seek_data": true, 00:07:45.503 "copy": false, 00:07:45.503 "nvme_iov_md": false 00:07:45.503 }, 00:07:45.503 "driver_specific": { 00:07:45.503 "lvol": { 00:07:45.503 "lvol_store_uuid": "f653377e-5b91-493a-a564-e668b43d7abf", 00:07:45.503 "base_bdev": "aio_bdev", 00:07:45.503 "thin_provision": false, 00:07:45.503 "num_allocated_clusters": 38, 00:07:45.503 "snapshot": false, 00:07:45.503 "clone": false, 00:07:45.503 "esnap_clone": false 00:07:45.503 } 00:07:45.503 } 00:07:45.503 } 00:07:45.503 ] 00:07:45.503 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:45.503 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f653377e-5b91-493a-a564-e668b43d7abf 00:07:45.503 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:45.764 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:45.764 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f653377e-5b91-493a-a564-e668b43d7abf 00:07:45.764 10:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:45.764 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:45.764 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 812eacb0-9d64-4580-b3aa-2aedb0c5eaac 00:07:46.024 10:49:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f653377e-5b91-493a-a564-e668b43d7abf 00:07:46.285 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:46.285 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.285 00:07:46.285 real 0m17.434s 00:07:46.285 user 0m44.893s 00:07:46.285 sys 0m2.979s 00:07:46.285 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:46.285 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:46.285 ************************************ 00:07:46.285 END TEST lvs_grow_dirty 00:07:46.285 ************************************ 00:07:46.545 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:46.545 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:07:46.545 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:07:46.545 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@822 -- # for n in $shm_files
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:07:46.546 nvmf_trace.0
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:46.546 rmmod nvme_tcp
00:07:46.546 rmmod nvme_fabrics
00:07:46.546 rmmod nvme_keyring
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3077095 ']'
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3077095
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3077095 ']'
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3077095
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3077095
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3077095'
killing process with pid 3077095
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3077095
00:07:46.546 10:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3077095
00:07:46.806 10:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:46.807 10:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:46.807 10:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:46.807 10:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr
00:07:46.807 10:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save
00:07:46.807 10:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:46.807 10:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore
00:07:46.807 10:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:46.807 10:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:46.807 10:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:46.807 10:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:46.807 10:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:48.717 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:48.717
00:07:48.717 real 0m43.532s
00:07:48.717 user 1m6.451s
00:07:48.717 sys 0m10.135s
00:07:48.717 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:48.717 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:48.717 ************************************
00:07:48.717 END TEST nvmf_lvs_grow
00:07:48.717 ************************************
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:48.978 ************************************
00:07:48.978 START TEST nvmf_bdev_io_wait
00:07:48.978 ************************************
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp
00:07:48.978 * Looking for test storage...
00:07:48.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-:
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-:
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<'
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1
00:07:48.978 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:49.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.240 --rc genhtml_branch_coverage=1
00:07:49.240 --rc genhtml_function_coverage=1
00:07:49.240 --rc genhtml_legend=1
00:07:49.240 --rc geninfo_all_blocks=1
00:07:49.240 --rc geninfo_unexecuted_blocks=1
00:07:49.240
00:07:49.240 '
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:49.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.240 --rc genhtml_branch_coverage=1
00:07:49.240 --rc genhtml_function_coverage=1
00:07:49.240 --rc genhtml_legend=1
00:07:49.240 --rc geninfo_all_blocks=1
00:07:49.240 --rc geninfo_unexecuted_blocks=1
00:07:49.240
00:07:49.240 '
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:07:49.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.240 --rc genhtml_branch_coverage=1
00:07:49.240 --rc genhtml_function_coverage=1
00:07:49.240 --rc genhtml_legend=1
00:07:49.240 --rc geninfo_all_blocks=1
00:07:49.240 --rc geninfo_unexecuted_blocks=1
00:07:49.240
00:07:49.240 '
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:07:49.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.240 --rc genhtml_branch_coverage=1
00:07:49.240 --rc genhtml_function_coverage=1
00:07:49.240 --rc genhtml_legend=1
00:07:49.240 --rc geninfo_all_blocks=1
00:07:49.240 --rc geninfo_unexecuted_blocks=1
00:07:49.240
00:07:49.240 '
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:49.240 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable
00:07:49.241 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=()
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=()
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=()
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=()
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=()
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=()
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=()
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
Found 0000:4b:00.0 (0x8086 - 0x159b)
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
Found 0000:4b:00.1 (0x8086 - 0x159b)
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
Found net devices under 0000:4b:00.0: cvl_0_0
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
Found net devices under 0000:4b:00.1: cvl_0_1
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:55.831 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:55.832 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:56.094 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:56.094 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:56.094 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:56.094 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:56.094 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:56.094 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:56.094 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:56.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:56.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms
00:07:56.356
00:07:56.356 --- 10.0.0.2 ping statistics ---
00:07:56.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:56.356 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:56.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:56.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms
00:07:56.356
00:07:56.356 --- 10.0.0.1 ping statistics ---
00:07:56.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:56.356 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3082172
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3082172
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3082172 ']'
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:56.356 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
[2024-11-06 10:49:47.649917] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization...
[2024-11-06 10:49:47.649985] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-11-06 10:49:47.733858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-11-06 10:49:47.776495] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-11-06 10:49:47.776532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-11-06 10:49:47.776541] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-11-06 10:49:47.776547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-11-06 10:49:47.776554] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-11-06 10:49:47.778150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-06 10:49:47.778265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-11-06 10:49:47.778423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-06 10:49:47.778424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
[2024-11-06 10:49:48.562678] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:57.188 Malloc0
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.188 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.448 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.448 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.448 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.448 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.448 [2024-11-06 10:49:48.621854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.448 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.448 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3082233 00:07:57.448 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:57.448 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:57.448 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3082235 
00:07:57.448 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:57.449 { 00:07:57.449 "params": { 00:07:57.449 "name": "Nvme$subsystem", 00:07:57.449 "trtype": "$TEST_TRANSPORT", 00:07:57.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:57.449 "adrfam": "ipv4", 00:07:57.449 "trsvcid": "$NVMF_PORT", 00:07:57.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:57.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:57.449 "hdgst": ${hdgst:-false}, 00:07:57.449 "ddgst": ${ddgst:-false} 00:07:57.449 }, 00:07:57.449 "method": "bdev_nvme_attach_controller" 00:07:57.449 } 00:07:57.449 EOF 00:07:57.449 )") 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3082238 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3082242 00:07:57.449 10:49:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:57.449 { 00:07:57.449 "params": { 00:07:57.449 "name": "Nvme$subsystem", 00:07:57.449 "trtype": "$TEST_TRANSPORT", 00:07:57.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:57.449 "adrfam": "ipv4", 00:07:57.449 "trsvcid": "$NVMF_PORT", 00:07:57.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:57.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:57.449 "hdgst": ${hdgst:-false}, 00:07:57.449 "ddgst": ${ddgst:-false} 00:07:57.449 }, 00:07:57.449 "method": "bdev_nvme_attach_controller" 00:07:57.449 } 00:07:57.449 EOF 00:07:57.449 )") 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:57.449 { 00:07:57.449 "params": { 00:07:57.449 "name": "Nvme$subsystem", 00:07:57.449 "trtype": "$TEST_TRANSPORT", 00:07:57.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:57.449 "adrfam": "ipv4", 00:07:57.449 "trsvcid": "$NVMF_PORT", 00:07:57.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:57.449 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:57.449 "hdgst": ${hdgst:-false}, 00:07:57.449 "ddgst": ${ddgst:-false} 00:07:57.449 }, 00:07:57.449 "method": "bdev_nvme_attach_controller" 00:07:57.449 } 00:07:57.449 EOF 00:07:57.449 )") 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:57.449 { 00:07:57.449 "params": { 00:07:57.449 "name": "Nvme$subsystem", 00:07:57.449 "trtype": "$TEST_TRANSPORT", 00:07:57.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:57.449 "adrfam": "ipv4", 00:07:57.449 "trsvcid": "$NVMF_PORT", 00:07:57.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:57.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:57.449 "hdgst": ${hdgst:-false}, 00:07:57.449 "ddgst": ${ddgst:-false} 00:07:57.449 }, 00:07:57.449 "method": "bdev_nvme_attach_controller" 00:07:57.449 } 00:07:57.449 EOF 00:07:57.449 )") 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3082233 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:57.449 "params": { 00:07:57.449 "name": "Nvme1", 00:07:57.449 "trtype": "tcp", 00:07:57.449 "traddr": "10.0.0.2", 00:07:57.449 "adrfam": "ipv4", 00:07:57.449 "trsvcid": "4420", 00:07:57.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:57.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:57.449 "hdgst": false, 00:07:57.449 "ddgst": false 00:07:57.449 }, 00:07:57.449 "method": "bdev_nvme_attach_controller" 00:07:57.449 }' 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:57.449 "params": { 00:07:57.449 "name": "Nvme1", 00:07:57.449 "trtype": "tcp", 00:07:57.449 "traddr": "10.0.0.2", 00:07:57.449 "adrfam": "ipv4", 00:07:57.449 "trsvcid": "4420", 00:07:57.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:57.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:57.449 "hdgst": false, 00:07:57.449 "ddgst": false 00:07:57.449 }, 00:07:57.449 "method": "bdev_nvme_attach_controller" 00:07:57.449 }' 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:57.449 "params": { 00:07:57.449 "name": "Nvme1", 00:07:57.449 "trtype": "tcp", 00:07:57.449 "traddr": "10.0.0.2", 00:07:57.449 "adrfam": "ipv4", 00:07:57.449 "trsvcid": "4420", 00:07:57.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:57.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:57.449 "hdgst": false, 00:07:57.449 "ddgst": false 00:07:57.449 }, 00:07:57.449 "method": "bdev_nvme_attach_controller" 00:07:57.449 }' 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:57.449 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:57.449 "params": { 00:07:57.449 "name": "Nvme1", 00:07:57.449 "trtype": "tcp", 00:07:57.449 "traddr": "10.0.0.2", 00:07:57.449 "adrfam": "ipv4", 00:07:57.449 "trsvcid": "4420", 00:07:57.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:57.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:57.449 "hdgst": false, 00:07:57.449 "ddgst": false 00:07:57.449 }, 00:07:57.449 "method": "bdev_nvme_attach_controller" 00:07:57.449 }' 00:07:57.449 [2024-11-06 10:49:48.676757] Starting SPDK v25.01-pre git sha1 
f0e4b91ff / DPDK 24.03.0 initialization... 00:07:57.449 [2024-11-06 10:49:48.676797] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:57.449 [2024-11-06 10:49:48.678241] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:07:57.449 [2024-11-06 10:49:48.678286] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:57.449 [2024-11-06 10:49:48.680095] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:07:57.449 [2024-11-06 10:49:48.680140] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:57.449 [2024-11-06 10:49:48.682290] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:07:57.449 [2024-11-06 10:49:48.682385] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:57.449 [2024-11-06 10:49:48.804760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.449 [2024-11-06 10:49:48.833353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:57.449 [2024-11-06 10:49:48.845718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.710 [2024-11-06 10:49:48.874401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:57.710 [2024-11-06 10:49:48.888765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.710 [2024-11-06 10:49:48.917397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:57.710 [2024-11-06 10:49:48.950849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.710 [2024-11-06 10:49:48.979880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:57.710 Running I/O for 1 seconds... 00:07:57.971 Running I/O for 1 seconds... 00:07:57.971 Running I/O for 1 seconds... 00:07:57.971 Running I/O for 1 seconds... 
00:07:58.912 18413.00 IOPS, 71.93 MiB/s 00:07:58.912 Latency(us) 00:07:58.912 [2024-11-06T09:49:50.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.912 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:58.912 Nvme1n1 : 1.00 18457.39 72.10 0.00 0.00 6918.96 3031.04 14308.69 00:07:58.912 [2024-11-06T09:49:50.334Z] =================================================================================================================== 00:07:58.912 [2024-11-06T09:49:50.334Z] Total : 18457.39 72.10 0.00 0.00 6918.96 3031.04 14308.69 00:07:58.912 12545.00 IOPS, 49.00 MiB/s 00:07:58.912 Latency(us) 00:07:58.912 [2024-11-06T09:49:50.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.912 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:58.912 Nvme1n1 : 1.01 12617.64 49.29 0.00 0.00 10113.15 4587.52 17913.17 00:07:58.912 [2024-11-06T09:49:50.334Z] =================================================================================================================== 00:07:58.912 [2024-11-06T09:49:50.334Z] Total : 12617.64 49.29 0.00 0.00 10113.15 4587.52 17913.17 00:07:58.912 11634.00 IOPS, 45.45 MiB/s 00:07:58.912 Latency(us) 00:07:58.912 [2024-11-06T09:49:50.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.912 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:58.912 Nvme1n1 : 1.01 11714.31 45.76 0.00 0.00 10893.65 2157.23 17257.81 00:07:58.912 [2024-11-06T09:49:50.334Z] =================================================================================================================== 00:07:58.912 [2024-11-06T09:49:50.334Z] Total : 11714.31 45.76 0.00 0.00 10893.65 2157.23 17257.81 00:07:58.912 189192.00 IOPS, 739.03 MiB/s 00:07:58.912 Latency(us) 00:07:58.912 [2024-11-06T09:49:50.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.912 Job: Nvme1n1 (Core 
Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:58.912 Nvme1n1 : 1.00 188818.77 737.57 0.00 0.00 674.12 300.37 1966.08 00:07:58.912 [2024-11-06T09:49:50.334Z] =================================================================================================================== 00:07:58.912 [2024-11-06T09:49:50.334Z] Total : 188818.77 737.57 0.00 0.00 674.12 300.37 1966.08 00:07:58.912 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3082235 00:07:58.912 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3082238 00:07:58.912 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3082242 00:07:58.912 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:58.912 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.912 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:58.912 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.912 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:58.912 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:58.912 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:58.912 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:58.912 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:58.912 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:58.912 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:07:58.912 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.173 rmmod nvme_tcp 00:07:59.173 rmmod nvme_fabrics 00:07:59.173 rmmod nvme_keyring 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3082172 ']' 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3082172 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3082172 ']' 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3082172 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3082172 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3082172' 00:07:59.173 killing process with pid 3082172 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3082172 00:07:59.173 10:49:50 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3082172 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.173 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:01.718 00:08:01.718 real 0m12.454s 00:08:01.718 user 0m18.449s 00:08:01.718 sys 0m6.854s 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:01.718 ************************************ 
00:08:01.718 END TEST nvmf_bdev_io_wait 00:08:01.718 ************************************ 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:01.718 ************************************ 00:08:01.718 START TEST nvmf_queue_depth 00:08:01.718 ************************************ 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:01.718 * Looking for test storage... 00:08:01.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:01.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.718 --rc genhtml_branch_coverage=1 00:08:01.718 --rc genhtml_function_coverage=1 00:08:01.718 --rc genhtml_legend=1 00:08:01.718 --rc geninfo_all_blocks=1 00:08:01.718 --rc 
geninfo_unexecuted_blocks=1 00:08:01.718 00:08:01.718 ' 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:01.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.718 --rc genhtml_branch_coverage=1 00:08:01.718 --rc genhtml_function_coverage=1 00:08:01.718 --rc genhtml_legend=1 00:08:01.718 --rc geninfo_all_blocks=1 00:08:01.718 --rc geninfo_unexecuted_blocks=1 00:08:01.718 00:08:01.718 ' 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:01.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.718 --rc genhtml_branch_coverage=1 00:08:01.718 --rc genhtml_function_coverage=1 00:08:01.718 --rc genhtml_legend=1 00:08:01.718 --rc geninfo_all_blocks=1 00:08:01.718 --rc geninfo_unexecuted_blocks=1 00:08:01.718 00:08:01.718 ' 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:01.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.718 --rc genhtml_branch_coverage=1 00:08:01.718 --rc genhtml_function_coverage=1 00:08:01.718 --rc genhtml_legend=1 00:08:01.718 --rc geninfo_all_blocks=1 00:08:01.718 --rc geninfo_unexecuted_blocks=1 00:08:01.718 00:08:01.718 ' 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.718 10:49:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.718 10:49:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.718 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.719 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.719 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.719 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:01.719 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:01.719 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:01.719 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:01.719 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:01.719 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.719 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:01.719 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:01.719 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:01.719 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.719 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.719 10:49:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.719 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:01.719 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:01.719 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:01.719 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:09.968 10:49:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:09.968 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:09.968 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:09.969 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:09.969 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:09.969 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.969 
10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.969 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:09.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:08:09.969 00:08:09.969 --- 10.0.0.2 ping statistics --- 00:08:09.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.969 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:09.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:08:09.969 00:08:09.969 --- 10.0.0.1 ping statistics --- 00:08:09.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.969 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3086912 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
3086912 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3086912 ']' 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:09.969 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.969 [2024-11-06 10:50:00.399204] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:08:09.969 [2024-11-06 10:50:00.399269] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.969 [2024-11-06 10:50:00.502091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.969 [2024-11-06 10:50:00.551630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.969 [2024-11-06 10:50:00.551684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:09.969 [2024-11-06 10:50:00.551693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.969 [2024-11-06 10:50:00.551706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.969 [2024-11-06 10:50:00.551713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.969 [2024-11-06 10:50:00.552508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.969 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:09.969 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:09.969 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:09.969 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.970 [2024-11-06 10:50:01.261436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.970 Malloc0 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.970 [2024-11-06 10:50:01.322724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.970 10:50:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3087245 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3087245 /var/tmp/bdevperf.sock 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3087245 ']' 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:09.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:09.970 10:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.970 [2024-11-06 10:50:01.380439] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:08:09.970 [2024-11-06 10:50:01.380501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3087245 ] 00:08:10.232 [2024-11-06 10:50:01.455567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.232 [2024-11-06 10:50:01.497422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.801 10:50:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:10.801 10:50:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:10.801 10:50:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:10.801 10:50:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.801 10:50:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:11.062 NVMe0n1 00:08:11.062 10:50:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.062 10:50:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:11.062 Running I/O for 10 seconds... 
00:08:13.389 8868.00 IOPS, 34.64 MiB/s [2024-11-06T09:50:05.751Z] 9829.00 IOPS, 38.39 MiB/s [2024-11-06T09:50:06.693Z] 10583.67 IOPS, 41.34 MiB/s [2024-11-06T09:50:07.634Z] 10912.50 IOPS, 42.63 MiB/s [2024-11-06T09:50:08.575Z] 11067.60 IOPS, 43.23 MiB/s [2024-11-06T09:50:09.517Z] 11259.00 IOPS, 43.98 MiB/s [2024-11-06T09:50:10.902Z] 11329.71 IOPS, 44.26 MiB/s [2024-11-06T09:50:11.843Z] 11388.25 IOPS, 44.49 MiB/s [2024-11-06T09:50:12.786Z] 11412.89 IOPS, 44.58 MiB/s [2024-11-06T09:50:12.786Z] 11464.40 IOPS, 44.78 MiB/s 00:08:21.364 Latency(us) 00:08:21.364 [2024-11-06T09:50:12.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.364 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:21.364 Verification LBA range: start 0x0 length 0x4000 00:08:21.364 NVMe0n1 : 10.07 11487.94 44.87 0.00 0.00 88838.03 24248.32 72526.51 00:08:21.364 [2024-11-06T09:50:12.786Z] =================================================================================================================== 00:08:21.364 [2024-11-06T09:50:12.786Z] Total : 11487.94 44.87 0.00 0.00 88838.03 24248.32 72526.51 00:08:21.364 { 00:08:21.364 "results": [ 00:08:21.364 { 00:08:21.364 "job": "NVMe0n1", 00:08:21.364 "core_mask": "0x1", 00:08:21.364 "workload": "verify", 00:08:21.364 "status": "finished", 00:08:21.364 "verify_range": { 00:08:21.364 "start": 0, 00:08:21.364 "length": 16384 00:08:21.364 }, 00:08:21.364 "queue_depth": 1024, 00:08:21.364 "io_size": 4096, 00:08:21.364 "runtime": 10.068647, 00:08:21.364 "iops": 11487.938746884263, 00:08:21.364 "mibps": 44.874760730016654, 00:08:21.364 "io_failed": 0, 00:08:21.364 "io_timeout": 0, 00:08:21.364 "avg_latency_us": 88838.02600823044, 00:08:21.364 "min_latency_us": 24248.32, 00:08:21.364 "max_latency_us": 72526.50666666667 00:08:21.364 } 00:08:21.364 ], 00:08:21.364 "core_count": 1 00:08:21.364 } 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 3087245 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3087245 ']' 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3087245 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3087245 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3087245' 00:08:21.364 killing process with pid 3087245 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3087245 00:08:21.364 Received shutdown signal, test time was about 10.000000 seconds 00:08:21.364 00:08:21.364 Latency(us) 00:08:21.364 [2024-11-06T09:50:12.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.364 [2024-11-06T09:50:12.786Z] =================================================================================================================== 00:08:21.364 [2024-11-06T09:50:12.786Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3087245 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:21.364 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:21.625 rmmod nvme_tcp 00:08:21.625 rmmod nvme_fabrics 00:08:21.625 rmmod nvme_keyring 00:08:21.625 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:21.625 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:21.625 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:21.625 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3086912 ']' 00:08:21.625 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3086912 00:08:21.625 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3086912 ']' 00:08:21.625 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3086912 00:08:21.625 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:21.625 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:21.625 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3086912 00:08:21.625 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:08:21.625 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:21.625 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3086912' 00:08:21.625 killing process with pid 3086912 00:08:21.625 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3086912 00:08:21.625 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3086912 00:08:21.625 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:21.625 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:21.625 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:21.625 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:21.625 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:21.625 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:21.625 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:21.625 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:21.625 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:21.625 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.625 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.625 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.169 10:50:15 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:24.169 00:08:24.169 real 0m22.375s 00:08:24.169 user 0m25.947s 00:08:24.169 sys 0m6.768s 00:08:24.169 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:24.169 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.169 ************************************ 00:08:24.169 END TEST nvmf_queue_depth 00:08:24.169 ************************************ 00:08:24.169 10:50:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:24.169 10:50:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:24.169 10:50:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:24.169 10:50:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:24.169 ************************************ 00:08:24.169 START TEST nvmf_target_multipath 00:08:24.169 ************************************ 00:08:24.169 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:24.169 * Looking for test storage... 
00:08:24.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.169 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:24.169 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:24.169 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:24.170 10:50:15 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:24.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.170 --rc genhtml_branch_coverage=1 00:08:24.170 --rc genhtml_function_coverage=1 00:08:24.170 --rc genhtml_legend=1 00:08:24.170 --rc geninfo_all_blocks=1 00:08:24.170 --rc geninfo_unexecuted_blocks=1 00:08:24.170 00:08:24.170 ' 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:24.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.170 --rc genhtml_branch_coverage=1 00:08:24.170 --rc genhtml_function_coverage=1 00:08:24.170 --rc genhtml_legend=1 00:08:24.170 --rc geninfo_all_blocks=1 00:08:24.170 --rc geninfo_unexecuted_blocks=1 00:08:24.170 00:08:24.170 ' 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:24.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.170 --rc genhtml_branch_coverage=1 00:08:24.170 --rc genhtml_function_coverage=1 00:08:24.170 --rc genhtml_legend=1 00:08:24.170 --rc geninfo_all_blocks=1 00:08:24.170 --rc geninfo_unexecuted_blocks=1 00:08:24.170 00:08:24.170 ' 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:24.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.170 --rc genhtml_branch_coverage=1 00:08:24.170 --rc genhtml_function_coverage=1 00:08:24.170 --rc genhtml_legend=1 00:08:24.170 --rc geninfo_all_blocks=1 00:08:24.170 --rc geninfo_unexecuted_blocks=1 00:08:24.170 00:08:24.170 ' 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:24.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:24.170 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:24.171 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:24.171 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:24.171 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:24.171 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.171 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:24.171 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:24.171 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:24.171 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.171 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.171 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.171 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:24.171 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:24.171 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:24.171 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:32.319 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:32.320 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:32.320 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:32.320 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:32.320 10:50:22 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:32.320 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:32.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:08:32.320 00:08:32.320 --- 10.0.0.2 ping statistics --- 00:08:32.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.320 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:32.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:08:32.320 00:08:32.320 --- 10.0.0.1 ping statistics --- 00:08:32.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.320 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:32.320 only one NIC for nvmf test 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:32.320 10:50:22 
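The `nvmf_tcp_init` sequence traced above lets a single host act as both NVMe/TCP target and initiator: one port of the NIC pair is moved into a network namespace, each side gets an address, a firewall rule opens port 4420, and a ping in each direction verifies the link before the target starts. A minimal dry-run sketch of that sequence (interface names, namespace name, and addresses are taken from the log; the `run` wrapper only echoes each command, since the real ones need root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init.
# Echo each command instead of executing it, so no root is required.
run() { echo "$*"; }

NS=cvl_0_0_ns_spdk        # namespace holding the target-side NIC
TGT_IF=cvl_0_0            # target interface (moved into the namespace)
INI_IF=cvl_0_1            # initiator interface (stays in the root ns)

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port (4420) and tag the rule so cleanup can find it.
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
# Verify both directions before starting the target app in the namespace.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```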
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:32.320 rmmod nvme_tcp 00:08:32.320 rmmod nvme_fabrics 00:08:32.320 rmmod nvme_keyring 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:32.320 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:32.321 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:32.321 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:32.321 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.321 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.321 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:33.706 00:08:33.706 real 0m9.645s 00:08:33.706 user 0m2.141s 00:08:33.706 sys 0m5.452s 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:33.706 ************************************ 00:08:33.706 END TEST nvmf_target_multipath 00:08:33.706 ************************************ 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core 
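The teardown traced above removes only the firewall rules the test added, by round-tripping the ruleset through `iptables-save`, dropping every line carrying the `SPDK_NVMF` comment tag, and feeding the rest to `iptables-restore` (the `iptr` helper). The filtering step can be sketched on a text dump, so no root is needed (the sample rules below are illustrative, not from the log):

```shell
#!/usr/bin/env bash
# Sketch of the iptr cleanup idea: strip only the rules tagged SPDK_NVMF
# from an iptables-save dump, leaving every other rule untouched.
filter_spdk_rules() { grep -v SPDK_NVMF; }

# Illustrative dump (what `iptables-save` might print).
saved='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -p tcp --dport 22 -j ACCEPT'

kept=$(printf '%s\n' "$saved" | filter_spdk_rules)
printf '%s\n' "$kept"
# In the real teardown this filtered output is piped to `iptables-restore`,
# so rules created outside the test survive the cleanup.
```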
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.706 ************************************ 00:08:33.706 START TEST nvmf_zcopy 00:08:33.706 ************************************ 00:08:33.706 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:33.706 * Looking for test storage... 00:08:33.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.706 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.707 10:50:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:33.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.707 --rc genhtml_branch_coverage=1 00:08:33.707 --rc genhtml_function_coverage=1 00:08:33.707 --rc genhtml_legend=1 00:08:33.707 --rc geninfo_all_blocks=1 00:08:33.707 --rc geninfo_unexecuted_blocks=1 00:08:33.707 00:08:33.707 ' 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:33.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.707 --rc genhtml_branch_coverage=1 00:08:33.707 --rc genhtml_function_coverage=1 00:08:33.707 --rc genhtml_legend=1 00:08:33.707 --rc geninfo_all_blocks=1 00:08:33.707 --rc geninfo_unexecuted_blocks=1 00:08:33.707 00:08:33.707 ' 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:33.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.707 --rc genhtml_branch_coverage=1 00:08:33.707 --rc genhtml_function_coverage=1 00:08:33.707 --rc genhtml_legend=1 00:08:33.707 --rc geninfo_all_blocks=1 00:08:33.707 --rc geninfo_unexecuted_blocks=1 00:08:33.707 00:08:33.707 ' 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:33.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.707 --rc genhtml_branch_coverage=1 00:08:33.707 --rc 
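The `cmp_versions` trace above (checking `lcov --version` with `lt 1.15 2`) splits both version strings on dots and compares them component by component, numerically, padding the shorter one with zeros. A self-contained sketch of that logic (the function name `version_lt` is ours; the script's own helper is `lt`, and its internals differ in detail):

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions idea from scripts/common.sh: split both
# versions on '.', then compare component by component as integers.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
        if (( x < y )); then return 0; fi  # first differing component decides
        if (( x > y )); then return 1; fi
    done
    return 1                               # equal versions are not less-than
}

version_lt 1.15 2    && echo "1.15 is older than 2"
version_lt 2.39.2 2.40 && echo "2.39.2 is older than 2.40"
```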
genhtml_function_coverage=1 00:08:33.707 --rc genhtml_legend=1 00:08:33.707 --rc geninfo_all_blocks=1 00:08:33.707 --rc geninfo_unexecuted_blocks=1 00:08:33.707 00:08:33.707 ' 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.707 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.970 10:50:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:33.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:33.970 10:50:25 
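The `paths/export.sh` lines above prepend the same toolchain directories each time the file is sourced, which is why the echoed `PATH` contains `/opt/golangci/1.54.2/bin`, `/opt/protoc/21.7/bin`, and `/opt/go/1.21.1/bin` many times over. Duplicates are harmless for lookup but bloat the environment; a sketch of removing them while preserving first-seen order (the helper name `dedup_path` is ours, not part of the scripts):

```shell
#!/usr/bin/env bash
# Remove duplicate entries from a PATH-like string, keeping the first
# occurrence of each directory so lookup precedence is unchanged.
dedup_path() {
    local entry out=
    local IFS=:
    for entry in $1; do
        case ":$out:" in
            *":$entry:"*) ;;                 # already present, skip
            *) out=${out:+$out:}$entry ;;    # append with ':' separator
        esac
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin:/usr/bin"
# → /opt/go/bin:/usr/bin:/bin
```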
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:33.970 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:42.119 10:50:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:42.119 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:42.119 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:42.119 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:42.119 10:50:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:42.119 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.119 10:50:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:42.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:08:42.119 00:08:42.119 --- 10.0.0.2 ping statistics --- 00:08:42.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.119 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:08:42.119 00:08:42.119 --- 10.0.0.1 ping statistics --- 00:08:42.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.119 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:08:42.119 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3097955 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3097955 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3097955 ']' 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:42.120 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.120 [2024-11-06 10:50:32.619665] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:08:42.120 [2024-11-06 10:50:32.619734] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.120 [2024-11-06 10:50:32.718703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.120 [2024-11-06 10:50:32.768620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.120 [2024-11-06 10:50:32.768682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
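The network plumbing traced above (nvmf_tcp_init in nvmf/common.sh) moves the target-side NIC port into a private network namespace so the target and initiator can talk NVMe/TCP over real hardware on a single host. A minimal sketch of that same sequence, using the interface names cvl_0_0/cvl_0_1 and addresses from this log (defined but not invoked here — it needs root on the test host):

```shell
# Sketch of the namespace setup traced in nvmf/common.sh@267-291.
# Interface names and addresses copied from this log; requires root.
setup_netns() {
    # Start from clean addressing on both ports.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    # Target port moves into a private namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends of the link.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring links up (loopback too, inside the namespace).
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Accept NVMe/TCP traffic (port 4420) arriving on the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check reachability in both directions, as the log does.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
}
# setup_netns   # run as root on the autotest node
```

With this in place, the nvmf_tgt process is launched under `ip netns exec cvl_0_0_ns_spdk` (the NVMF_TARGET_NS_CMD prefix seen in the log), while bdevperf connects from the root namespace.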
00:08:42.120 [2024-11-06 10:50:32.768691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.120 [2024-11-06 10:50:32.768698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.120 [2024-11-06 10:50:32.768704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.120 [2024-11-06 10:50:32.769494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.120 [2024-11-06 10:50:33.474710] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.120 [2024-11-06 10:50:33.498995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.120 malloc0 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.120 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.382 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.382 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:42.382 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:42.382 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:42.382 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:42.382 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:42.382 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:42.382 { 00:08:42.382 "params": { 00:08:42.382 "name": "Nvme$subsystem", 00:08:42.382 "trtype": "$TEST_TRANSPORT", 00:08:42.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:42.382 "adrfam": "ipv4", 00:08:42.382 "trsvcid": "$NVMF_PORT", 00:08:42.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:42.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:42.382 "hdgst": ${hdgst:-false}, 00:08:42.382 "ddgst": ${ddgst:-false} 00:08:42.382 }, 00:08:42.382 "method": "bdev_nvme_attach_controller" 00:08:42.382 } 00:08:42.382 EOF 00:08:42.382 )") 00:08:42.382 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:42.382 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:42.382 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:42.382 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:42.382 "params": { 00:08:42.382 "name": "Nvme1", 00:08:42.382 "trtype": "tcp", 00:08:42.382 "traddr": "10.0.0.2", 00:08:42.382 "adrfam": "ipv4", 00:08:42.382 "trsvcid": "4420", 00:08:42.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:42.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:42.382 "hdgst": false, 00:08:42.382 "ddgst": false 00:08:42.382 }, 00:08:42.382 "method": "bdev_nvme_attach_controller" 00:08:42.382 }' 00:08:42.382 [2024-11-06 10:50:33.600932] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:08:42.382 [2024-11-06 10:50:33.600996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098018 ] 00:08:42.382 [2024-11-06 10:50:33.675921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.383 [2024-11-06 10:50:33.718110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.643 Running I/O for 10 seconds... 
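The target bring-up and benchmark launch traced above reduce to a handful of SPDK RPCs plus a bdevperf invocation fed a generated JSON config over an anonymous fd (/dev/fd/62 in the log, i.e. bash process substitution). A hedged sketch: the `rpc` function below is a hypothetical stand-in for scripts/rpc.py, and the "subsystems"/"bdev" wrapper around the attach-controller entry is an assumption about what gen_nvmf_target_json emits — only the inner params object appears verbatim in this log:

```shell
# Hypothetical stand-in for: scripts/rpc.py -s /var/tmp/spdk.sock "$@"
rpc() { echo "rpc.py $*"; }

# Target-side setup, as traced in target/zcopy.sh@22-30:
rpc nvmf_create_transport -t tcp -o -c 0 --zcopy          # TCP transport, zero-copy enabled
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 4096 -b malloc0                 # 32 MiB backing bdev, 4 KiB blocks
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Initiator-side JSON consumed by bdevperf (outer wrapper assumed):
gen_config() {
cat <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [{
  "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
             "adrfam": "ipv4", "trsvcid": "4420",
             "subnqn": "nqn.2016-06.io.spdk:cnode1",
             "hostnqn": "nqn.2016-06.io.spdk:host1",
             "hdgst": false, "ddgst": false},
  "method": "bdev_nvme_attach_controller"}]}]}
EOF
}

# 10 s verify workload, queue depth 128, 8 KiB I/O size — as in the log:
# bdevperf --json <(gen_config) -t 10 -q 128 -w verify -o 8192
```

The rising per-second IOPS that follows (6887 up to ~9482) is the verify workload ramping as connections and queue depth fill; bdevperf then prints the latency summary table.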
00:08:44.972 6887.00 IOPS, 53.80 MiB/s [2024-11-06T09:50:37.335Z] 8311.00 IOPS, 64.93 MiB/s [2024-11-06T09:50:38.277Z] 8797.67 IOPS, 68.73 MiB/s [2024-11-06T09:50:39.220Z] 9044.00 IOPS, 70.66 MiB/s [2024-11-06T09:50:40.161Z] 9191.40 IOPS, 71.81 MiB/s [2024-11-06T09:50:41.105Z] 9287.00 IOPS, 72.55 MiB/s [2024-11-06T09:50:42.049Z] 9355.14 IOPS, 73.09 MiB/s [2024-11-06T09:50:43.436Z] 9410.00 IOPS, 73.52 MiB/s [2024-11-06T09:50:44.380Z] 9449.33 IOPS, 73.82 MiB/s [2024-11-06T09:50:44.380Z] 9481.60 IOPS, 74.08 MiB/s 00:08:52.958 Latency(us) 00:08:52.958 [2024-11-06T09:50:44.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.958 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:52.958 Verification LBA range: start 0x0 length 0x1000 00:08:52.958 Nvme1n1 : 10.01 9482.01 74.08 0.00 0.00 13449.04 1870.51 27306.67 00:08:52.958 [2024-11-06T09:50:44.380Z] =================================================================================================================== 00:08:52.958 [2024-11-06T09:50:44.380Z] Total : 9482.01 74.08 0.00 0.00 13449.04 1870.51 27306.67 00:08:52.958 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3100253 00:08:52.958 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:52.958 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.958 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:52.958 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:52.958 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:52.958 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.958 10:50:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.958 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.958 { 00:08:52.958 "params": { 00:08:52.958 "name": "Nvme$subsystem", 00:08:52.959 "trtype": "$TEST_TRANSPORT", 00:08:52.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.959 "adrfam": "ipv4", 00:08:52.959 "trsvcid": "$NVMF_PORT", 00:08:52.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.959 "hdgst": ${hdgst:-false}, 00:08:52.959 "ddgst": ${ddgst:-false} 00:08:52.959 }, 00:08:52.959 "method": "bdev_nvme_attach_controller" 00:08:52.959 } 00:08:52.959 EOF 00:08:52.959 )") 00:08:52.959 [2024-11-06 10:50:44.143874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.143901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:52.959 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:52.959 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:52.959 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.959 "params": { 00:08:52.959 "name": "Nvme1", 00:08:52.959 "trtype": "tcp", 00:08:52.959 "traddr": "10.0.0.2", 00:08:52.959 "adrfam": "ipv4", 00:08:52.959 "trsvcid": "4420", 00:08:52.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.959 "hdgst": false, 00:08:52.959 "ddgst": false 00:08:52.959 }, 00:08:52.959 "method": "bdev_nvme_attach_controller" 00:08:52.959 }' 00:08:52.959 [2024-11-06 10:50:44.155872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.155881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.167900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.167908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.179930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.179938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.186942] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:08:52.959 [2024-11-06 10:50:44.186989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100253 ] 00:08:52.959 [2024-11-06 10:50:44.191960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.191968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.203991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.203998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.216020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.216027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.228050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.228058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.240082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.240089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.252114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.252121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.256620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.959 [2024-11-06 10:50:44.264147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:52.959 [2024-11-06 10:50:44.264160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.276178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.276186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.288209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.288217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.291965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.959 [2024-11-06 10:50:44.300239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.300247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.312276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.312288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.324304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.324316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.336334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.336342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.348375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.348383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.360395] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.360402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.959 [2024-11-06 10:50:44.372434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.959 [2024-11-06 10:50:44.372452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.220 [2024-11-06 10:50:44.384460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.221 [2024-11-06 10:50:44.384470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.221 [2024-11-06 10:50:44.396491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.221 [2024-11-06 10:50:44.396501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.221 [2024-11-06 10:50:44.408522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.221 [2024-11-06 10:50:44.408529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.221 [2024-11-06 10:50:44.420553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.221 [2024-11-06 10:50:44.420559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.221 [2024-11-06 10:50:44.432586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.221 [2024-11-06 10:50:44.432594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.221 [2024-11-06 10:50:44.444619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.221 [2024-11-06 10:50:44.444628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.221 [2024-11-06 10:50:44.456647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:53.221 [2024-11-06 10:50:44.456656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.221 Running I/O for 5 seconds... 00:08:53.221 [... the pair "subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" repeats at ~13 ms intervals from 10:50:44.468687 through 10:50:46.685754; ~170 identical pairs elided ...] 00:08:54.268 18993.00 IOPS, 148.38 MiB/s [2024-11-06T09:50:45.690Z] 00:08:55.315 19105.00 IOPS, 149.26 MiB/s [2024-11-06T09:50:46.737Z] 00:08:55.315 [2024-11-06 10:50:46.698513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.315 [2024-11-06 10:50:46.698528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:55.315 [2024-11-06 10:50:46.711443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.315 [2024-11-06 10:50:46.711463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.315 [2024-11-06 10:50:46.724772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.315 [2024-11-06 10:50:46.724786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.576 [2024-11-06 10:50:46.738213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.576 [2024-11-06 10:50:46.738229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.576 [2024-11-06 10:50:46.751328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.576 [2024-11-06 10:50:46.751343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.576 [2024-11-06 10:50:46.763912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.576 [2024-11-06 10:50:46.763927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.576 [2024-11-06 10:50:46.776106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.576 [2024-11-06 10:50:46.776121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.576 [2024-11-06 10:50:46.788984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.576 [2024-11-06 10:50:46.788999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.576 [2024-11-06 10:50:46.802154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.576 [2024-11-06 10:50:46.802168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.576 [2024-11-06 10:50:46.815342] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.576 [2024-11-06 10:50:46.815357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.576 [2024-11-06 10:50:46.829047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.576 [2024-11-06 10:50:46.829061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.576 [2024-11-06 10:50:46.841388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.576 [2024-11-06 10:50:46.841403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.577 [2024-11-06 10:50:46.854175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.577 [2024-11-06 10:50:46.854189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.577 [2024-11-06 10:50:46.867534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.577 [2024-11-06 10:50:46.867549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.577 [2024-11-06 10:50:46.880404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.577 [2024-11-06 10:50:46.880419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.577 [2024-11-06 10:50:46.894020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.577 [2024-11-06 10:50:46.894034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.577 [2024-11-06 10:50:46.907604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.577 [2024-11-06 10:50:46.907619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.577 [2024-11-06 10:50:46.920787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:55.577 [2024-11-06 10:50:46.920802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.577 [2024-11-06 10:50:46.934347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.577 [2024-11-06 10:50:46.934361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.577 [2024-11-06 10:50:46.947644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.577 [2024-11-06 10:50:46.947659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.577 [2024-11-06 10:50:46.960769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.577 [2024-11-06 10:50:46.960788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.577 [2024-11-06 10:50:46.973374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.577 [2024-11-06 10:50:46.973389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.577 [2024-11-06 10:50:46.985700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.577 [2024-11-06 10:50:46.985715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:46.998287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 [2024-11-06 10:50:46.998303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.010951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 [2024-11-06 10:50:47.010966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.024490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 
[2024-11-06 10:50:47.024505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.037596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 [2024-11-06 10:50:47.037611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.050543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 [2024-11-06 10:50:47.050559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.063550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 [2024-11-06 10:50:47.063565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.076226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 [2024-11-06 10:50:47.076241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.088734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 [2024-11-06 10:50:47.088754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.101490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 [2024-11-06 10:50:47.101506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.114318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 [2024-11-06 10:50:47.114333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.126981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 [2024-11-06 10:50:47.126996] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.140250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 [2024-11-06 10:50:47.140265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.153683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 [2024-11-06 10:50:47.153698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.167168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 [2024-11-06 10:50:47.167183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.180711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 [2024-11-06 10:50:47.180726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.194155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 [2024-11-06 10:50:47.194170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.207256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 [2024-11-06 10:50:47.207271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.220592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.838 [2024-11-06 10:50:47.220608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.838 [2024-11-06 10:50:47.233795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.839 [2024-11-06 10:50:47.233810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:55.839 [2024-11-06 10:50:47.246543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.839 [2024-11-06 10:50:47.246558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.259940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.259956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.273362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.273377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.286947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.286962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.300180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.300195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.312864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.312879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.325549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.325564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.339200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.339215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.352547] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.352562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.365200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.365215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.377948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.377963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.391279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.391294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.404781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.404796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.418416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.418431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.431142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.431157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.443891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.443906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.456038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.456053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.469054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.469069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.482518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.482533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 19146.00 IOPS, 149.58 MiB/s [2024-11-06T09:50:47.522Z] [2024-11-06 10:50:47.495536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.495551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.100 [2024-11-06 10:50:47.508734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.100 [2024-11-06 10:50:47.508754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.361 [2024-11-06 10:50:47.521878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.521893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.535107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.535123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.547611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.547626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.560870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.560885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.573772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.573788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.587126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.587141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.599856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.599871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.612320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.612335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.625625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.625640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.639002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.639016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.651760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.651775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.664828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 
[2024-11-06 10:50:47.664843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.677587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.677601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.690599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.690617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.703757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.703772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.716891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.716906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.730020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.730035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.743489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.743504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.756611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.756625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.362 [2024-11-06 10:50:47.769765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.362 [2024-11-06 10:50:47.769780] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:47.783372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:47.783387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:47.796214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:47.796229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:47.809302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:47.809317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:47.821722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:47.821737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:47.835323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:47.835338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:47.849164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:47.849179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:47.862594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:47.862609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:47.875766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:47.875780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:56.624 [2024-11-06 10:50:47.889023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:47.889037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:47.902559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:47.902574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:47.915401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:47.915416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:47.928971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:47.928985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:47.941347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:47.941365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:47.954003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:47.954017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:47.967493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:47.967507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:47.981074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:47.981089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:47.993999] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:47.994014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:48.006680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:48.006694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:48.019917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:48.019932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.624 [2024-11-06 10:50:48.032927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.624 [2024-11-06 10:50:48.032941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.885 [2024-11-06 10:50:48.045980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.885 [2024-11-06 10:50:48.045995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.885 [2024-11-06 10:50:48.059217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.885 [2024-11-06 10:50:48.059232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.885 [2024-11-06 10:50:48.072830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.885 [2024-11-06 10:50:48.072845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.885 [2024-11-06 10:50:48.085352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.885 [2024-11-06 10:50:48.085367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.885 [2024-11-06 10:50:48.098390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:56.885 [2024-11-06 10:50:48.098404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:57.147 19164.00 IOPS, 149.72 MiB/s [2024-11-06T09:50:48.569Z]
00:08:58.196 [2024-11-06 10:50:49.481730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.196 [2024-11-06 10:50:49.481749]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:58.196 19177.60 IOPS, 149.82 MiB/s [2024-11-06T09:50:49.618Z]
[2024-11-06 10:50:49.494369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:58.196 [2024-11-06 10:50:49.494384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:58.196
00:08:58.196 Latency(us)
00:08:58.196 [2024-11-06T09:50:49.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:58.196 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:58.196 Nvme1n1 : 5.01 19179.31 149.84 0.00 0.00 6666.79 3017.39 15619.41
00:08:58.196 [2024-11-06T09:50:49.618Z] ===================================================================================================================
00:08:58.196 [2024-11-06T09:50:49.618Z] Total : 19179.31 149.84 0.00 0.00 6666.79 3017.39 15619.41
00:08:58.196 [2024-11-06 10:50:49.503657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:58.196 [2024-11-06 10:50:49.503671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:58.196 [2024-11-06 10:50:49.515686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:58.196 [2024-11-06 10:50:49.515698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:58.196 [2024-11-06 10:50:49.527719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:58.196 [2024-11-06 10:50:49.527733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:58.196 [2024-11-06 10:50:49.539751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:58.196 [2024-11-06 10:50:49.539764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:58.196 [2024-11-06 10:50:49.551781]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:58.196 [2024-11-06 10:50:49.551793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:58.196 [2024-11-06 10:50:49.563808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:58.197 [2024-11-06 10:50:49.563818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:58.197 [2024-11-06 10:50:49.575838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:58.197 [2024-11-06 10:50:49.575845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:58.197 [2024-11-06 10:50:49.587872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:58.197 [2024-11-06 10:50:49.587881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:58.197 [2024-11-06 10:50:49.599902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:58.197 [2024-11-06 10:50:49.599911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:58.197 [2024-11-06 10:50:49.611931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:58.197 [2024-11-06 10:50:49.611943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:58.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3100253) - No such process
00:08:58.457 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3100253
00:08:58.457 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:58.457 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.457 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10
-- # set +x
00:08:58.457 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.457 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:58.457 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.457 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:58.457 delay0
00:08:58.457 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.457 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:08:58.457 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.457 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:58.457 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.457 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:08:58.457 [2024-11-06 10:50:49.730963] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:05.048 Initializing NVMe Controllers
00:09:05.048 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:05.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:05.048 Initialization complete. Launching workers.
00:09:05.048 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 4574
00:09:05.048 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 4850, failed to submit 44
00:09:05.048 success 4685, unsuccessful 165, failed 0
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:05.048 rmmod nvme_tcp
00:09:05.048 rmmod nvme_fabrics
00:09:05.048 rmmod nvme_keyring
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3097955 ']'
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3097955
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3097955 ']'
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3097955
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy --
common/autotest_common.sh@957 -- # uname
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3097955
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3097955'
00:09:05.048 killing process with pid 3097955
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3097955
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3097955
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:05.048 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:09:05.309 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:05.309 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:05.309 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd
_remove_spdk_ns
00:09:05.309 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:05.309 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:07.283 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:07.283
00:09:07.283 real 0m33.630s
00:09:07.283 user 0m45.027s
00:09:07.283 sys 0m10.683s
00:09:07.283 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:07.283 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:07.283 ************************************
00:09:07.283 END TEST nvmf_zcopy
00:09:07.283 ************************************
00:09:07.283 10:50:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:07.283 10:50:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:09:07.283 10:50:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:07.283 10:50:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:07.283 ************************************
00:09:07.283 START TEST nvmf_nmic
00:09:07.283 ************************************
00:09:07.283 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:07.602 * Looking for test storage...
00:09:07.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.602 10:50:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:07.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.602 --rc genhtml_branch_coverage=1 00:09:07.602 --rc genhtml_function_coverage=1 00:09:07.602 --rc genhtml_legend=1 00:09:07.602 --rc geninfo_all_blocks=1 00:09:07.602 --rc geninfo_unexecuted_blocks=1 
00:09:07.602 00:09:07.602 ' 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:07.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.602 --rc genhtml_branch_coverage=1 00:09:07.602 --rc genhtml_function_coverage=1 00:09:07.602 --rc genhtml_legend=1 00:09:07.602 --rc geninfo_all_blocks=1 00:09:07.602 --rc geninfo_unexecuted_blocks=1 00:09:07.602 00:09:07.602 ' 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:07.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.602 --rc genhtml_branch_coverage=1 00:09:07.602 --rc genhtml_function_coverage=1 00:09:07.602 --rc genhtml_legend=1 00:09:07.602 --rc geninfo_all_blocks=1 00:09:07.602 --rc geninfo_unexecuted_blocks=1 00:09:07.602 00:09:07.602 ' 00:09:07.602 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:07.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.602 --rc genhtml_branch_coverage=1 00:09:07.603 --rc genhtml_function_coverage=1 00:09:07.603 --rc genhtml_legend=1 00:09:07.603 --rc geninfo_all_blocks=1 00:09:07.603 --rc geninfo_unexecuted_blocks=1 00:09:07.603 00:09:07.603 ' 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.603 10:50:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:07.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:07.603 
10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:07.603 10:50:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.753 10:51:06 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:15.753 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:15.753 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.753 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:15.754 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:15.754 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:15.754 
10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:15.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:09:15.754 00:09:15.754 --- 10.0.0.2 ping statistics --- 00:09:15.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.754 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:15.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:15.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:09:15.754 00:09:15.754 --- 10.0.0.1 ping statistics --- 00:09:15.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.754 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3107030 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3107030 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3107030 ']' 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:15.754 10:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.754 [2024-11-06 10:51:06.452913] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:09:15.754 [2024-11-06 10:51:06.452980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.754 [2024-11-06 10:51:06.537164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.754 [2024-11-06 10:51:06.580751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.754 [2024-11-06 10:51:06.580792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:15.754 [2024-11-06 10:51:06.580801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.754 [2024-11-06 10:51:06.580808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.754 [2024-11-06 10:51:06.580813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.754 [2024-11-06 10:51:06.582403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.754 [2024-11-06 10:51:06.582522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.754 [2024-11-06 10:51:06.582680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.754 [2024-11-06 10:51:06.582681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.016 [2024-11-06 10:51:07.309307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.016 
10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.016 Malloc0 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.016 [2024-11-06 10:51:07.384058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:16.016 test case1: single bdev can't be used in multiple subsystems 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.016 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.016 [2024-11-06 10:51:07.419976] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:16.016 [2024-11-06 
10:51:07.419995] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:16.016 [2024-11-06 10:51:07.420004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 request: 00:09:16.016 { 00:09:16.016 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:16.016 "namespace": { 00:09:16.016 "bdev_name": "Malloc0", 00:09:16.016 "no_auto_visible": false 00:09:16.016 }, 00:09:16.016 "method": "nvmf_subsystem_add_ns", 00:09:16.016 "req_id": 1 00:09:16.016 } 00:09:16.017 Got JSON-RPC error response 00:09:16.017 response: 00:09:16.017 { 00:09:16.017 "code": -32602, 00:09:16.017 "message": "Invalid parameters" 00:09:16.017 } 00:09:16.017 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:16.017 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:16.017 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:16.017 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:16.017 Adding namespace failed - expected result. 
00:09:16.017 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:16.017 test case2: host connect to nvmf target in multiple paths 00:09:16.017 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:16.017 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.017 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.017 [2024-11-06 10:51:07.432112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:16.277 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.277 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:17.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:19.571 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:19.571 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:19.571 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:19.571 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:19.571 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 
00:09:21.500 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:21.500 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:21.500 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:21.500 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:21.500 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:21.500 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:09:21.500 10:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:21.500 [global] 00:09:21.500 thread=1 00:09:21.500 invalidate=1 00:09:21.500 rw=write 00:09:21.500 time_based=1 00:09:21.500 runtime=1 00:09:21.500 ioengine=libaio 00:09:21.500 direct=1 00:09:21.500 bs=4096 00:09:21.500 iodepth=1 00:09:21.500 norandommap=0 00:09:21.500 numjobs=1 00:09:21.500 00:09:21.500 verify_dump=1 00:09:21.500 verify_backlog=512 00:09:21.500 verify_state_save=0 00:09:21.500 do_verify=1 00:09:21.500 verify=crc32c-intel 00:09:21.500 [job0] 00:09:21.500 filename=/dev/nvme0n1 00:09:21.500 Could not set queue depth (nvme0n1) 00:09:21.763 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.763 fio-3.35 00:09:21.763 Starting 1 thread 00:09:23.146 00:09:23.146 job0: (groupid=0, jobs=1): err= 0: pid=3108877: Wed Nov 6 10:51:14 2024 00:09:23.146 read: IOPS=16, BW=66.5KiB/s (68.1kB/s)(68.0KiB/1022msec) 00:09:23.146 slat (nsec): min=10299, max=26083, avg=24764.18, stdev=3730.41 00:09:23.146 clat (usec): min=40961, max=43035, avg=42209.56, stdev=653.99 00:09:23.146 lat (usec): min=40986, max=43060, 
avg=42234.32, stdev=654.03 00:09:23.146 clat percentiles (usec): 00:09:23.146 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:09:23.146 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:23.146 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:09:23.146 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:23.146 | 99.99th=[43254] 00:09:23.146 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:09:23.146 slat (usec): min=10, max=2345, avg=30.00, stdev=103.52 00:09:23.146 clat (usec): min=292, max=1891, avg=557.73, stdev=128.70 00:09:23.146 lat (usec): min=303, max=2956, avg=587.72, stdev=169.02 00:09:23.146 clat percentiles (usec): 00:09:23.146 | 1.00th=[ 334], 5.00th=[ 367], 10.00th=[ 396], 20.00th=[ 453], 00:09:23.146 | 30.00th=[ 482], 40.00th=[ 537], 50.00th=[ 562], 60.00th=[ 594], 00:09:23.146 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 693], 95.00th=[ 717], 00:09:23.146 | 99.00th=[ 791], 99.50th=[ 857], 99.90th=[ 1893], 99.95th=[ 1893], 00:09:23.146 | 99.99th=[ 1893] 00:09:23.146 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:23.146 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:23.146 lat (usec) : 500=32.51%, 750=61.81%, 1000=2.08% 00:09:23.146 lat (msec) : 2=0.38%, 50=3.21% 00:09:23.146 cpu : usr=1.08%, sys=0.78%, ctx=532, majf=0, minf=1 00:09:23.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.146 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.146 00:09:23.146 Run status group 0 (all jobs): 00:09:23.146 READ: bw=66.5KiB/s (68.1kB/s), 66.5KiB/s-66.5KiB/s (68.1kB/s-68.1kB/s), 
io=68.0KiB (69.6kB), run=1022-1022msec 00:09:23.146 WRITE: bw=2004KiB/s (2052kB/s), 2004KiB/s-2004KiB/s (2052kB/s-2052kB/s), io=2048KiB (2097kB), run=1022-1022msec 00:09:23.146 00:09:23.146 Disk stats (read/write): 00:09:23.146 nvme0n1: ios=71/512, merge=0/0, ticks=799/288, in_queue=1087, util=98.90% 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:23.146 rmmod nvme_tcp 00:09:23.146 rmmod nvme_fabrics 00:09:23.146 rmmod nvme_keyring 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3107030 ']' 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3107030 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3107030 ']' 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3107030 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3107030 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3107030' 00:09:23.146 killing process with pid 3107030 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3107030 00:09:23.146 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3107030 00:09:23.407 10:51:14 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:23.407 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:23.407 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:23.407 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:23.407 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:23.407 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:23.407 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:23.407 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:23.407 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:23.407 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.407 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.407 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.321 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:25.321 00:09:25.321 real 0m18.046s 00:09:25.321 user 0m48.588s 00:09:25.321 sys 0m6.496s 00:09:25.321 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:25.321 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.321 ************************************ 00:09:25.321 END TEST nvmf_nmic 00:09:25.321 ************************************ 00:09:25.321 10:51:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:25.321 10:51:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:25.321 10:51:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:25.321 10:51:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.584 ************************************ 00:09:25.584 START TEST nvmf_fio_target 00:09:25.584 ************************************ 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:25.584 * Looking for test storage... 00:09:25.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.584 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:25.584 10:51:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:25.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.585 --rc genhtml_branch_coverage=1 00:09:25.585 --rc genhtml_function_coverage=1 00:09:25.585 --rc genhtml_legend=1 00:09:25.585 --rc geninfo_all_blocks=1 00:09:25.585 --rc geninfo_unexecuted_blocks=1 00:09:25.585 00:09:25.585 ' 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:25.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.585 --rc genhtml_branch_coverage=1 00:09:25.585 --rc genhtml_function_coverage=1 00:09:25.585 --rc genhtml_legend=1 00:09:25.585 --rc geninfo_all_blocks=1 00:09:25.585 --rc geninfo_unexecuted_blocks=1 00:09:25.585 00:09:25.585 ' 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:25.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.585 --rc genhtml_branch_coverage=1 00:09:25.585 --rc genhtml_function_coverage=1 00:09:25.585 --rc genhtml_legend=1 00:09:25.585 --rc geninfo_all_blocks=1 00:09:25.585 --rc geninfo_unexecuted_blocks=1 00:09:25.585 00:09:25.585 ' 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 
00:09:25.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.585 --rc genhtml_branch_coverage=1 00:09:25.585 --rc genhtml_function_coverage=1 00:09:25.585 --rc genhtml_legend=1 00:09:25.585 --rc geninfo_all_blocks=1 00:09:25.585 --rc geninfo_unexecuted_blocks=1 00:09:25.585 00:09:25.585 ' 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:25.585 10:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.725 10:51:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:33.725 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:33.725 10:51:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:33.725 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:33.725 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:33.725 Found net devices under 0000:4b:00.1: cvl_0_1 
00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:33.725 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:33.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:09:33.726 00:09:33.726 --- 10.0.0.2 ping statistics --- 00:09:33.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.726 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:33.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:09:33.726 00:09:33.726 --- 10.0.0.1 ping statistics --- 00:09:33.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.726 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
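The trace above shows `nvmftestinit` moving one port of the two-port NIC into a network namespace so a single host can act as both NVMe-oF target (inside the namespace) and initiator (in the root namespace). A minimal sketch of that topology, with interface names, addresses, and the TCP port taken from this log; this is not the full `nvmf/common.sh` logic, and the commands need root plus the same two-port layout:

```shell
# Target-side port goes into its own namespace; initiator-side port stays
# in the root namespace. Names cvl_0_0/cvl_0_1 and 10.0.0.x are from this log.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP (port 4420) in from the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions, as the log does.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

With this in place the target application is simply prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is why the log's `NVMF_TARGET_NS_CMD` wraps the `nvmf_tgt` invocation that follows.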
00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3113482 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3113482 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3113482 ']' 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:33.726 10:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.726 [2024-11-06 10:51:24.472000] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:09:33.726 [2024-11-06 10:51:24.472068] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.726 [2024-11-06 10:51:24.554289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.726 [2024-11-06 10:51:24.596089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.726 [2024-11-06 10:51:24.596143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.726 [2024-11-06 10:51:24.596151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.726 [2024-11-06 10:51:24.596158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.726 [2024-11-06 10:51:24.596164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:33.726 [2024-11-06 10:51:24.597958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.726 [2024-11-06 10:51:24.598075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.726 [2024-11-06 10:51:24.598232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.726 [2024-11-06 10:51:24.598233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.987 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:33.987 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:33.987 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:33.987 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:33.987 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.987 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.987 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:34.248 [2024-11-06 10:51:25.477589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.248 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.509 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:34.509 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.509 10:51:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:34.509 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.769 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:34.769 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.029 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:35.029 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:35.289 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.289 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:35.289 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.550 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:35.550 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.810 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:35.810 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:36.070 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:36.330 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:36.330 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:36.330 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:36.330 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:36.590 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.850 [2024-11-06 10:51:28.029213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.850 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:36.850 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:37.115 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
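The sequence of `rpc.py` calls traced above builds the whole target stack for `fio.sh`: a TCP transport, six malloc bdevs, a RAID-0 and a concat array on top of four of them, and one subsystem exporting all four namespaces. A condensed sketch of that sequence, assuming an SPDK checkout with a running `nvmf_tgt` (`$rpc` is shorthand for the full `scripts/rpc.py` path used in the log; the host-specific `--hostnqn`/`--hostid` values are omitted):

```shell
rpc=scripts/rpc.py   # log uses the absolute path under the Jenkins workspace

$rpc nvmf_create_transport -t tcp -o -u 8192

# Six 64 MiB / 512 B-block malloc bdevs; each call prints Malloc0..Malloc6.
for i in 0 1 2 3 4 5; do $rpc bdev_malloc_create 64 512; done

$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, yielding /dev/nvme0n1..n4 for the fio jobs below.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
```

The four namespaces surface on the initiator as `/dev/nvme0n1` through `/dev/nvme0n4`, which is what `waitforserial SPDKISFASTANDAWESOME 4` polls for before fio starts.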
00:09:39.025 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:39.025 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:39.025 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:39.025 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:39.025 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:39.025 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:40.963 10:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:40.963 10:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:40.963 10:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.963 10:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:40.963 10:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.963 10:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:40.963 10:51:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:40.963 [global] 00:09:40.963 thread=1 00:09:40.963 invalidate=1 00:09:40.963 rw=write 00:09:40.963 time_based=1 00:09:40.963 runtime=1 00:09:40.963 ioengine=libaio 00:09:40.963 direct=1 00:09:40.963 bs=4096 00:09:40.963 iodepth=1 00:09:40.963 norandommap=0 00:09:40.963 numjobs=1 00:09:40.963 00:09:40.963 
verify_dump=1 00:09:40.963 verify_backlog=512 00:09:40.963 verify_state_save=0 00:09:40.963 do_verify=1 00:09:40.963 verify=crc32c-intel 00:09:40.963 [job0] 00:09:40.963 filename=/dev/nvme0n1 00:09:40.963 [job1] 00:09:40.963 filename=/dev/nvme0n2 00:09:40.963 [job2] 00:09:40.963 filename=/dev/nvme0n3 00:09:40.963 [job3] 00:09:40.963 filename=/dev/nvme0n4 00:09:40.963 Could not set queue depth (nvme0n1) 00:09:40.963 Could not set queue depth (nvme0n2) 00:09:40.963 Could not set queue depth (nvme0n3) 00:09:40.963 Could not set queue depth (nvme0n4) 00:09:41.230 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.230 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.230 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.230 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.230 fio-3.35 00:09:41.230 Starting 4 threads 00:09:42.642 00:09:42.642 job0: (groupid=0, jobs=1): err= 0: pid=3115357: Wed Nov 6 10:51:33 2024 00:09:42.642 read: IOPS=85, BW=342KiB/s (351kB/s)(344KiB/1005msec) 00:09:42.642 slat (nsec): min=6864, max=40919, avg=22892.37, stdev=7426.31 00:09:42.642 clat (usec): min=648, max=41866, avg=8795.48, stdev=16103.35 00:09:42.642 lat (usec): min=666, max=41892, avg=8818.37, stdev=16104.81 00:09:42.642 clat percentiles (usec): 00:09:42.642 | 1.00th=[ 652], 5.00th=[ 668], 10.00th=[ 717], 20.00th=[ 775], 00:09:42.642 | 30.00th=[ 799], 40.00th=[ 840], 50.00th=[ 865], 60.00th=[ 914], 00:09:42.642 | 70.00th=[ 996], 80.00th=[ 1303], 90.00th=[41157], 95.00th=[41157], 00:09:42.642 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:42.642 | 99.99th=[41681] 00:09:42.642 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:09:42.642 slat (nsec): min=9415, max=51222, avg=19625.87, 
stdev=11240.24 00:09:42.642 clat (usec): min=113, max=871, avg=455.38, stdev=112.91 00:09:42.642 lat (usec): min=126, max=904, avg=475.01, stdev=114.42 00:09:42.642 clat percentiles (usec): 00:09:42.642 | 1.00th=[ 255], 5.00th=[ 293], 10.00th=[ 326], 20.00th=[ 351], 00:09:42.642 | 30.00th=[ 379], 40.00th=[ 424], 50.00th=[ 457], 60.00th=[ 482], 00:09:42.642 | 70.00th=[ 506], 80.00th=[ 545], 90.00th=[ 594], 95.00th=[ 652], 00:09:42.642 | 99.00th=[ 775], 99.50th=[ 848], 99.90th=[ 873], 99.95th=[ 873], 00:09:42.642 | 99.99th=[ 873] 00:09:42.642 bw ( KiB/s): min= 4096, max= 4096, per=41.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.642 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.642 lat (usec) : 250=0.33%, 500=58.36%, 750=28.26%, 1000=8.86% 00:09:42.642 lat (msec) : 2=1.34%, 50=2.84% 00:09:42.642 cpu : usr=0.50%, sys=1.29%, ctx=598, majf=0, minf=1 00:09:42.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.642 issued rwts: total=86,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.642 job1: (groupid=0, jobs=1): err= 0: pid=3115377: Wed Nov 6 10:51:33 2024 00:09:42.642 read: IOPS=633, BW=2533KiB/s (2594kB/s)(2536KiB/1001msec) 00:09:42.642 slat (nsec): min=5672, max=64048, avg=23949.90, stdev=8243.17 00:09:42.642 clat (usec): min=328, max=1158, avg=691.29, stdev=101.39 00:09:42.642 lat (usec): min=356, max=1185, avg=715.24, stdev=104.52 00:09:42.642 clat percentiles (usec): 00:09:42.642 | 1.00th=[ 449], 5.00th=[ 523], 10.00th=[ 545], 20.00th=[ 611], 00:09:42.642 | 30.00th=[ 644], 40.00th=[ 676], 50.00th=[ 709], 60.00th=[ 734], 00:09:42.642 | 70.00th=[ 758], 80.00th=[ 775], 90.00th=[ 799], 95.00th=[ 832], 00:09:42.642 | 99.00th=[ 889], 99.50th=[ 898], 99.90th=[ 1156], 
99.95th=[ 1156], 00:09:42.642 | 99.99th=[ 1156] 00:09:42.642 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:42.642 slat (nsec): min=9100, max=67464, avg=32870.96, stdev=8237.27 00:09:42.642 clat (usec): min=118, max=764, avg=488.90, stdev=104.61 00:09:42.642 lat (usec): min=129, max=816, avg=521.77, stdev=107.36 00:09:42.642 clat percentiles (usec): 00:09:42.642 | 1.00th=[ 245], 5.00th=[ 289], 10.00th=[ 355], 20.00th=[ 396], 00:09:42.642 | 30.00th=[ 445], 40.00th=[ 469], 50.00th=[ 486], 60.00th=[ 515], 00:09:42.642 | 70.00th=[ 553], 80.00th=[ 586], 90.00th=[ 627], 95.00th=[ 652], 00:09:42.642 | 99.00th=[ 701], 99.50th=[ 717], 99.90th=[ 750], 99.95th=[ 766], 00:09:42.642 | 99.99th=[ 766] 00:09:42.642 bw ( KiB/s): min= 4096, max= 4096, per=41.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.642 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.642 lat (usec) : 250=0.78%, 500=34.98%, 750=51.87%, 1000=12.24% 00:09:42.642 lat (msec) : 2=0.12% 00:09:42.642 cpu : usr=2.90%, sys=7.00%, ctx=1658, majf=0, minf=1 00:09:42.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.642 issued rwts: total=634,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.642 job2: (groupid=0, jobs=1): err= 0: pid=3115401: Wed Nov 6 10:51:33 2024 00:09:42.642 read: IOPS=17, BW=70.2KiB/s (71.9kB/s)(72.0KiB/1025msec) 00:09:42.642 slat (nsec): min=7915, max=29003, avg=27406.44, stdev=4869.39 00:09:42.642 clat (usec): min=1099, max=42038, avg=39473.27, stdev=9585.34 00:09:42.642 lat (usec): min=1127, max=42066, avg=39500.67, stdev=9585.25 00:09:42.642 clat percentiles (usec): 00:09:42.642 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[41157], 20.00th=[41157], 00:09:42.642 | 
30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:09:42.642 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:42.642 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:42.642 | 99.99th=[42206] 00:09:42.642 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:09:42.642 slat (nsec): min=9677, max=70265, avg=34607.25, stdev=8582.60 00:09:42.642 clat (usec): min=228, max=841, avg=569.79, stdev=117.55 00:09:42.642 lat (usec): min=262, max=877, avg=604.40, stdev=119.80 00:09:42.642 clat percentiles (usec): 00:09:42.642 | 1.00th=[ 297], 5.00th=[ 367], 10.00th=[ 416], 20.00th=[ 469], 00:09:42.642 | 30.00th=[ 506], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 603], 00:09:42.642 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 725], 95.00th=[ 766], 00:09:42.642 | 99.00th=[ 816], 99.50th=[ 832], 99.90th=[ 840], 99.95th=[ 840], 00:09:42.642 | 99.99th=[ 840] 00:09:42.642 bw ( KiB/s): min= 4096, max= 4096, per=41.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.642 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.642 lat (usec) : 250=0.19%, 500=27.17%, 750=62.83%, 1000=6.42% 00:09:42.642 lat (msec) : 2=0.19%, 50=3.21% 00:09:42.642 cpu : usr=1.76%, sys=1.46%, ctx=532, majf=0, minf=1 00:09:42.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.642 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.642 job3: (groupid=0, jobs=1): err= 0: pid=3115410: Wed Nov 6 10:51:33 2024 00:09:42.642 read: IOPS=16, BW=67.5KiB/s (69.1kB/s)(68.0KiB/1008msec) 00:09:42.642 slat (nsec): min=9809, max=26156, avg=24838.00, stdev=3876.30 00:09:42.642 clat (usec): min=836, max=42015, avg=39533.91, 
stdev=9972.26 00:09:42.642 lat (usec): min=846, max=42041, avg=39558.74, stdev=9976.14 00:09:42.642 clat percentiles (usec): 00:09:42.642 | 1.00th=[ 840], 5.00th=[ 840], 10.00th=[41681], 20.00th=[41681], 00:09:42.642 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:09:42.642 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:42.642 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:42.642 | 99.99th=[42206] 00:09:42.642 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:09:42.642 slat (nsec): min=9738, max=65933, avg=29602.28, stdev=9736.78 00:09:42.642 clat (usec): min=324, max=897, avg=617.99, stdev=106.13 00:09:42.642 lat (usec): min=336, max=931, avg=647.59, stdev=111.28 00:09:42.642 clat percentiles (usec): 00:09:42.642 | 1.00th=[ 359], 5.00th=[ 408], 10.00th=[ 465], 20.00th=[ 537], 00:09:42.642 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 668], 00:09:42.642 | 70.00th=[ 693], 80.00th=[ 717], 90.00th=[ 742], 95.00th=[ 758], 00:09:42.642 | 99.00th=[ 807], 99.50th=[ 832], 99.90th=[ 898], 99.95th=[ 898], 00:09:42.642 | 99.99th=[ 898] 00:09:42.642 bw ( KiB/s): min= 4096, max= 4096, per=41.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.642 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.642 lat (usec) : 500=15.69%, 750=74.86%, 1000=6.43% 00:09:42.642 lat (msec) : 50=3.02% 00:09:42.642 cpu : usr=0.89%, sys=1.29%, ctx=530, majf=0, minf=1 00:09:42.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.642 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.642 00:09:42.642 Run status group 0 (all jobs): 00:09:42.642 READ: bw=2946KiB/s 
(3017kB/s), 67.5KiB/s-2533KiB/s (69.1kB/s-2594kB/s), io=3020KiB (3092kB), run=1001-1025msec 00:09:42.642 WRITE: bw=9990KiB/s (10.2MB/s), 1998KiB/s-4092KiB/s (2046kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1025msec 00:09:42.642 00:09:42.642 Disk stats (read/write): 00:09:42.642 nvme0n1: ios=130/512, merge=0/0, ticks=613/225, in_queue=838, util=86.47% 00:09:42.642 nvme0n2: ios=550/856, merge=0/0, ticks=377/331, in_queue=708, util=87.81% 00:09:42.642 nvme0n3: ios=70/512, merge=0/0, ticks=1000/233, in_queue=1233, util=96.49% 00:09:42.642 nvme0n4: ios=12/512, merge=0/0, ticks=463/297, in_queue=760, util=89.42% 00:09:42.643 10:51:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:42.643 [global] 00:09:42.643 thread=1 00:09:42.643 invalidate=1 00:09:42.643 rw=randwrite 00:09:42.643 time_based=1 00:09:42.643 runtime=1 00:09:42.643 ioengine=libaio 00:09:42.643 direct=1 00:09:42.643 bs=4096 00:09:42.643 iodepth=1 00:09:42.643 norandommap=0 00:09:42.643 numjobs=1 00:09:42.643 00:09:42.643 verify_dump=1 00:09:42.643 verify_backlog=512 00:09:42.643 verify_state_save=0 00:09:42.643 do_verify=1 00:09:42.643 verify=crc32c-intel 00:09:42.643 [job0] 00:09:42.643 filename=/dev/nvme0n1 00:09:42.643 [job1] 00:09:42.643 filename=/dev/nvme0n2 00:09:42.643 [job2] 00:09:42.643 filename=/dev/nvme0n3 00:09:42.643 [job3] 00:09:42.643 filename=/dev/nvme0n4 00:09:42.643 Could not set queue depth (nvme0n1) 00:09:42.643 Could not set queue depth (nvme0n2) 00:09:42.643 Could not set queue depth (nvme0n3) 00:09:42.643 Could not set queue depth (nvme0n4) 00:09:42.904 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.904 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.904 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.904 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.904 fio-3.35 00:09:42.904 Starting 4 threads 00:09:44.314 00:09:44.314 job0: (groupid=0, jobs=1): err= 0: pid=3115883: Wed Nov 6 10:51:35 2024 00:09:44.314 read: IOPS=17, BW=69.2KiB/s (70.9kB/s)(72.0KiB/1040msec) 00:09:44.314 slat (nsec): min=25881, max=26838, avg=26320.44, stdev=243.88 00:09:44.314 clat (usec): min=976, max=42033, avg=39391.99, stdev=9596.85 00:09:44.314 lat (usec): min=1003, max=42060, avg=39418.31, stdev=9596.77 00:09:44.314 clat percentiles (usec): 00:09:44.314 | 1.00th=[ 979], 5.00th=[ 979], 10.00th=[41157], 20.00th=[41157], 00:09:44.314 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:09:44.314 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:44.314 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:44.314 | 99.99th=[42206] 00:09:44.314 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:09:44.314 slat (nsec): min=8881, max=51221, avg=28975.95, stdev=8663.54 00:09:44.314 clat (usec): min=316, max=900, avg=609.18, stdev=109.63 00:09:44.314 lat (usec): min=348, max=932, avg=638.16, stdev=112.79 00:09:44.314 clat percentiles (usec): 00:09:44.314 | 1.00th=[ 347], 5.00th=[ 433], 10.00th=[ 461], 20.00th=[ 510], 00:09:44.314 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 603], 60.00th=[ 644], 00:09:44.314 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 783], 00:09:44.314 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 898], 99.95th=[ 898], 00:09:44.314 | 99.99th=[ 898] 00:09:44.314 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.314 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.314 lat (usec) : 500=17.74%, 750=68.30%, 1000=10.75% 00:09:44.314 lat (msec) : 50=3.21% 00:09:44.314 cpu : 
usr=1.15%, sys=1.83%, ctx=530, majf=0, minf=1 00:09:44.314 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.314 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.314 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.314 job1: (groupid=0, jobs=1): err= 0: pid=3115899: Wed Nov 6 10:51:35 2024 00:09:44.314 read: IOPS=593, BW=2374KiB/s (2431kB/s)(2436KiB/1026msec) 00:09:44.314 slat (nsec): min=6619, max=57685, avg=23830.98, stdev=6383.20 00:09:44.314 clat (usec): min=209, max=41391, avg=969.32, stdev=3652.64 00:09:44.314 lat (usec): min=234, max=41416, avg=993.15, stdev=3652.67 00:09:44.314 clat percentiles (usec): 00:09:44.314 | 1.00th=[ 334], 5.00th=[ 416], 10.00th=[ 482], 20.00th=[ 529], 00:09:44.314 | 30.00th=[ 578], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 676], 00:09:44.314 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 791], 95.00th=[ 832], 00:09:44.314 | 99.00th=[ 996], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:44.314 | 99.99th=[41157] 00:09:44.314 write: IOPS=998, BW=3992KiB/s (4088kB/s)(4096KiB/1026msec); 0 zone resets 00:09:44.314 slat (nsec): min=9073, max=50258, avg=27802.03, stdev=8784.77 00:09:44.314 clat (usec): min=114, max=760, avg=371.21, stdev=107.88 00:09:44.314 lat (usec): min=125, max=792, avg=399.01, stdev=108.98 00:09:44.314 clat percentiles (usec): 00:09:44.314 | 1.00th=[ 165], 5.00th=[ 196], 10.00th=[ 265], 20.00th=[ 281], 00:09:44.314 | 30.00th=[ 297], 40.00th=[ 322], 50.00th=[ 363], 60.00th=[ 396], 00:09:44.314 | 70.00th=[ 424], 80.00th=[ 465], 90.00th=[ 523], 95.00th=[ 562], 00:09:44.314 | 99.00th=[ 635], 99.50th=[ 668], 99.90th=[ 676], 99.95th=[ 758], 00:09:44.314 | 99.99th=[ 758] 00:09:44.314 bw ( KiB/s): min= 4087, max= 4096, per=41.55%, avg=4091.50, stdev= 6.36, samples=2 00:09:44.314 
iops : min= 1021, max= 1024, avg=1022.50, stdev= 2.12, samples=2 00:09:44.314 lat (usec) : 250=5.14%, 500=53.95%, 750=33.99%, 1000=6.55% 00:09:44.314 lat (msec) : 2=0.06%, 50=0.31% 00:09:44.314 cpu : usr=2.34%, sys=4.20%, ctx=1633, majf=0, minf=1 00:09:44.314 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.314 issued rwts: total=609,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.314 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.314 job2: (groupid=0, jobs=1): err= 0: pid=3115919: Wed Nov 6 10:51:35 2024 00:09:44.314 read: IOPS=89, BW=360KiB/s (368kB/s)(360KiB/1001msec) 00:09:44.314 slat (nsec): min=9556, max=46215, avg=27448.52, stdev=3705.81 00:09:44.314 clat (usec): min=908, max=42056, avg=7424.62, stdev=14775.51 00:09:44.314 lat (usec): min=935, max=42083, avg=7452.07, stdev=14775.23 00:09:44.314 clat percentiles (usec): 00:09:44.314 | 1.00th=[ 906], 5.00th=[ 971], 10.00th=[ 996], 20.00th=[ 1106], 00:09:44.314 | 30.00th=[ 1123], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:09:44.314 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[41681], 95.00th=[41681], 00:09:44.314 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:44.314 | 99.99th=[42206] 00:09:44.314 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:44.314 slat (nsec): min=9002, max=62210, avg=28728.36, stdev=10035.07 00:09:44.314 clat (usec): min=231, max=899, avg=606.99, stdev=115.19 00:09:44.314 lat (usec): min=243, max=932, avg=635.72, stdev=120.24 00:09:44.314 clat percentiles (usec): 00:09:44.314 | 1.00th=[ 293], 5.00th=[ 392], 10.00th=[ 441], 20.00th=[ 515], 00:09:44.314 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:09:44.314 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 766], 
00:09:44.314 | 99.00th=[ 832], 99.50th=[ 848], 99.90th=[ 898], 99.95th=[ 898], 00:09:44.314 | 99.99th=[ 898] 00:09:44.314 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.314 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.314 lat (usec) : 250=0.33%, 500=15.12%, 750=61.63%, 1000=9.47% 00:09:44.314 lat (msec) : 2=11.13%, 50=2.33% 00:09:44.314 cpu : usr=1.10%, sys=2.30%, ctx=602, majf=0, minf=1 00:09:44.314 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.315 issued rwts: total=90,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.315 job3: (groupid=0, jobs=1): err= 0: pid=3115926: Wed Nov 6 10:51:35 2024 00:09:44.315 read: IOPS=54, BW=217KiB/s (223kB/s)(220KiB/1012msec) 00:09:44.315 slat (nsec): min=27524, max=45844, avg=28230.42, stdev=2437.20 00:09:44.315 clat (usec): min=831, max=42020, avg=12146.96, stdev=18192.85 00:09:44.315 lat (usec): min=860, max=42048, avg=12175.19, stdev=18192.70 00:09:44.315 clat percentiles (usec): 00:09:44.315 | 1.00th=[ 832], 5.00th=[ 1020], 10.00th=[ 1074], 20.00th=[ 1090], 00:09:44.315 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:09:44.315 | 70.00th=[ 1172], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:44.315 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:44.315 | 99.99th=[42206] 00:09:44.315 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:09:44.315 slat (nsec): min=9492, max=66757, avg=31255.24, stdev=9823.23 00:09:44.315 clat (usec): min=241, max=1087, avg=629.10, stdev=116.65 00:09:44.315 lat (usec): min=251, max=1122, avg=660.35, stdev=120.76 00:09:44.315 clat percentiles (usec): 00:09:44.315 | 
1.00th=[ 351], 5.00th=[ 404], 10.00th=[ 469], 20.00th=[ 545], 00:09:44.315 | 30.00th=[ 586], 40.00th=[ 611], 50.00th=[ 627], 60.00th=[ 668], 00:09:44.315 | 70.00th=[ 693], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 816], 00:09:44.315 | 99.00th=[ 865], 99.50th=[ 955], 99.90th=[ 1090], 99.95th=[ 1090], 00:09:44.315 | 99.99th=[ 1090] 00:09:44.315 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.315 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.315 lat (usec) : 250=0.18%, 500=11.64%, 750=67.20%, 1000=11.46% 00:09:44.315 lat (msec) : 2=6.88%, 50=2.65% 00:09:44.315 cpu : usr=0.59%, sys=2.77%, ctx=570, majf=0, minf=1 00:09:44.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.315 issued rwts: total=55,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.315 00:09:44.315 Run status group 0 (all jobs): 00:09:44.315 READ: bw=2969KiB/s (3040kB/s), 69.2KiB/s-2374KiB/s (70.9kB/s-2431kB/s), io=3088KiB (3162kB), run=1001-1040msec 00:09:44.315 WRITE: bw=9846KiB/s (10.1MB/s), 1969KiB/s-3992KiB/s (2016kB/s-4088kB/s), io=10.0MiB (10.5MB), run=1001-1040msec 00:09:44.315 00:09:44.315 Disk stats (read/write): 00:09:44.315 nvme0n1: ios=63/512, merge=0/0, ticks=568/248, in_queue=816, util=88.28% 00:09:44.315 nvme0n2: ios=653/1024, merge=0/0, ticks=515/370, in_queue=885, util=93.48% 00:09:44.315 nvme0n3: ios=72/512, merge=0/0, ticks=586/237, in_queue=823, util=92.19% 00:09:44.315 nvme0n4: ios=69/512, merge=0/0, ticks=719/266, in_queue=985, util=97.12% 00:09:44.315 10:51:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 
00:09:44.315 [global] 00:09:44.315 thread=1 00:09:44.315 invalidate=1 00:09:44.315 rw=write 00:09:44.315 time_based=1 00:09:44.315 runtime=1 00:09:44.315 ioengine=libaio 00:09:44.315 direct=1 00:09:44.315 bs=4096 00:09:44.315 iodepth=128 00:09:44.315 norandommap=0 00:09:44.315 numjobs=1 00:09:44.315 00:09:44.315 verify_dump=1 00:09:44.315 verify_backlog=512 00:09:44.315 verify_state_save=0 00:09:44.315 do_verify=1 00:09:44.315 verify=crc32c-intel 00:09:44.315 [job0] 00:09:44.315 filename=/dev/nvme0n1 00:09:44.315 [job1] 00:09:44.315 filename=/dev/nvme0n2 00:09:44.315 [job2] 00:09:44.315 filename=/dev/nvme0n3 00:09:44.315 [job3] 00:09:44.315 filename=/dev/nvme0n4 00:09:44.315 Could not set queue depth (nvme0n1) 00:09:44.315 Could not set queue depth (nvme0n2) 00:09:44.315 Could not set queue depth (nvme0n3) 00:09:44.315 Could not set queue depth (nvme0n4) 00:09:44.576 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.576 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.576 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.576 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.576 fio-3.35 00:09:44.576 Starting 4 threads 00:09:45.959 00:09:45.959 job0: (groupid=0, jobs=1): err= 0: pid=3116356: Wed Nov 6 10:51:37 2024 00:09:45.959 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:09:45.959 slat (nsec): min=898, max=10620k, avg=101865.68, stdev=687885.45 00:09:45.959 clat (usec): min=5210, max=43736, avg=11855.07, stdev=5618.91 00:09:45.959 lat (usec): min=5216, max=43747, avg=11956.93, stdev=5688.13 00:09:45.959 clat percentiles (usec): 00:09:45.959 | 1.00th=[ 6325], 5.00th=[ 7177], 10.00th=[ 7373], 20.00th=[ 8586], 00:09:45.959 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[11076], 
00:09:45.959 | 70.00th=[11731], 80.00th=[12780], 90.00th=[17171], 95.00th=[24773], 00:09:45.959 | 99.00th=[36963], 99.50th=[38536], 99.90th=[43779], 99.95th=[43779], 00:09:45.959 | 99.99th=[43779] 00:09:45.959 write: IOPS=4395, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1008msec); 0 zone resets 00:09:45.959 slat (nsec): min=1601, max=8801.2k, avg=123604.88, stdev=601235.55 00:09:45.959 clat (usec): min=1293, max=43708, avg=17895.95, stdev=11904.31 00:09:45.959 lat (usec): min=1324, max=43711, avg=18019.56, stdev=11987.66 00:09:45.959 clat percentiles (usec): 00:09:45.959 | 1.00th=[ 4948], 5.00th=[ 5473], 10.00th=[ 5997], 20.00th=[ 6915], 00:09:45.959 | 30.00th=[ 8160], 40.00th=[ 9110], 50.00th=[11469], 60.00th=[18482], 00:09:45.959 | 70.00th=[26084], 80.00th=[31065], 90.00th=[36439], 95.00th=[39060], 00:09:45.959 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:09:45.959 | 99.99th=[43779] 00:09:45.959 bw ( KiB/s): min=14416, max=20016, per=20.34%, avg=17216.00, stdev=3959.80, samples=2 00:09:45.959 iops : min= 3604, max= 5004, avg=4304.00, stdev=989.95, samples=2 00:09:45.959 lat (msec) : 2=0.02%, 4=0.43%, 10=42.50%, 20=33.19%, 50=23.85% 00:09:45.959 cpu : usr=3.28%, sys=4.77%, ctx=370, majf=0, minf=1 00:09:45.959 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:45.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.959 issued rwts: total=4096,4431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.959 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.959 job1: (groupid=0, jobs=1): err= 0: pid=3116385: Wed Nov 6 10:51:37 2024 00:09:45.959 read: IOPS=2937, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1004msec) 00:09:45.959 slat (nsec): min=934, max=11954k, avg=121727.51, stdev=710418.28 00:09:45.960 clat (usec): min=3422, max=34155, avg=15616.73, stdev=4833.28 00:09:45.960 lat (usec): min=3428, max=37011, 
avg=15738.45, stdev=4893.49 00:09:45.960 clat percentiles (usec): 00:09:45.960 | 1.00th=[ 4686], 5.00th=[ 9896], 10.00th=[11207], 20.00th=[11863], 00:09:45.960 | 30.00th=[12387], 40.00th=[13042], 50.00th=[14353], 60.00th=[15401], 00:09:45.960 | 70.00th=[17171], 80.00th=[19530], 90.00th=[22414], 95.00th=[26608], 00:09:45.960 | 99.00th=[30016], 99.50th=[31327], 99.90th=[34341], 99.95th=[34341], 00:09:45.960 | 99.99th=[34341] 00:09:45.960 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:09:45.960 slat (nsec): min=1638, max=13168k, avg=202348.65, stdev=970081.59 00:09:45.960 clat (usec): min=6450, max=74324, avg=26305.49, stdev=14693.00 00:09:45.960 lat (usec): min=6460, max=74332, avg=26507.83, stdev=14793.63 00:09:45.960 clat percentiles (usec): 00:09:45.960 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[11338], 20.00th=[13829], 00:09:45.960 | 30.00th=[17957], 40.00th=[18482], 50.00th=[21890], 60.00th=[26084], 00:09:45.960 | 70.00th=[29492], 80.00th=[36439], 90.00th=[47973], 95.00th=[58983], 00:09:45.960 | 99.00th=[70779], 99.50th=[71828], 99.90th=[73925], 99.95th=[73925], 00:09:45.960 | 99.99th=[73925] 00:09:45.960 bw ( KiB/s): min=11912, max=12664, per=14.52%, avg=12288.00, stdev=531.74, samples=2 00:09:45.960 iops : min= 2978, max= 3166, avg=3072.00, stdev=132.94, samples=2 00:09:45.960 lat (msec) : 4=0.18%, 10=5.25%, 20=57.23%, 50=32.77%, 100=4.57% 00:09:45.960 cpu : usr=2.99%, sys=3.69%, ctx=313, majf=0, minf=1 00:09:45.960 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:45.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.960 issued rwts: total=2949,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.960 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.960 job2: (groupid=0, jobs=1): err= 0: pid=3116425: Wed Nov 6 10:51:37 2024 00:09:45.960 read: IOPS=9162, BW=35.8MiB/s 
(37.5MB/s)(36.0MiB/1007msec) 00:09:45.960 slat (nsec): min=1026, max=9299.3k, avg=55126.20, stdev=413741.77 00:09:45.960 clat (usec): min=1983, max=26129, avg=7521.78, stdev=2570.62 00:09:45.960 lat (usec): min=2138, max=26162, avg=7576.90, stdev=2594.92 00:09:45.960 clat percentiles (usec): 00:09:45.960 | 1.00th=[ 3326], 5.00th=[ 5080], 10.00th=[ 5538], 20.00th=[ 5932], 00:09:45.960 | 30.00th=[ 6325], 40.00th=[ 6718], 50.00th=[ 6915], 60.00th=[ 7242], 00:09:45.960 | 70.00th=[ 7832], 80.00th=[ 8455], 90.00th=[10028], 95.00th=[11338], 00:09:45.960 | 99.00th=[20055], 99.50th=[22152], 99.90th=[24773], 99.95th=[25822], 00:09:45.960 | 99.99th=[26084] 00:09:45.960 write: IOPS=9660, BW=37.7MiB/s (39.6MB/s)(38.0MiB/1007msec); 0 zone resets 00:09:45.960 slat (nsec): min=1673, max=5412.1k, avg=45715.75, stdev=322004.88 00:09:45.960 clat (usec): min=1556, max=16106, avg=5981.68, stdev=1543.83 00:09:45.960 lat (usec): min=1565, max=16109, avg=6027.40, stdev=1558.48 00:09:45.960 clat percentiles (usec): 00:09:45.960 | 1.00th=[ 2409], 5.00th=[ 3621], 10.00th=[ 3949], 20.00th=[ 4490], 00:09:45.960 | 30.00th=[ 5473], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6325], 00:09:45.960 | 70.00th=[ 6521], 80.00th=[ 6783], 90.00th=[ 7767], 95.00th=[ 8717], 00:09:45.960 | 99.00th=[10683], 99.50th=[11731], 99.90th=[12125], 99.95th=[16057], 00:09:45.960 | 99.99th=[16057] 00:09:45.960 bw ( KiB/s): min=36937, max=40024, per=45.47%, avg=38480.50, stdev=2182.84, samples=2 00:09:45.960 iops : min= 9234, max=10006, avg=9620.00, stdev=545.89, samples=2 00:09:45.960 lat (msec) : 2=0.25%, 4=6.48%, 10=87.68%, 20=5.09%, 50=0.50% 00:09:45.960 cpu : usr=6.86%, sys=10.24%, ctx=636, majf=0, minf=1 00:09:45.960 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:45.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.960 issued rwts: total=9227,9728,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:09:45.960 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.960 job3: (groupid=0, jobs=1): err= 0: pid=3116439: Wed Nov 6 10:51:37 2024 00:09:45.960 read: IOPS=3766, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1004msec) 00:09:45.960 slat (nsec): min=998, max=13973k, avg=108728.61, stdev=710630.28 00:09:45.960 clat (usec): min=1539, max=35946, avg=14494.57, stdev=4650.34 00:09:45.960 lat (usec): min=4913, max=36997, avg=14603.30, stdev=4710.36 00:09:45.960 clat percentiles (usec): 00:09:45.960 | 1.00th=[ 5014], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[10814], 00:09:45.960 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13566], 60.00th=[14484], 00:09:45.960 | 70.00th=[15533], 80.00th=[17695], 90.00th=[20055], 95.00th=[22676], 00:09:45.960 | 99.00th=[33162], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:09:45.960 | 99.99th=[35914] 00:09:45.960 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:09:45.960 slat (nsec): min=1791, max=15888k, avg=138415.39, stdev=825808.65 00:09:45.960 clat (usec): min=4581, max=60312, avg=17593.78, stdev=10746.00 00:09:45.960 lat (usec): min=7347, max=60322, avg=17732.20, stdev=10825.33 00:09:45.960 clat percentiles (usec): 00:09:45.960 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9110], 20.00th=[ 9634], 00:09:45.960 | 30.00th=[10552], 40.00th=[11338], 50.00th=[14222], 60.00th=[17957], 00:09:45.960 | 70.00th=[19268], 80.00th=[22414], 90.00th=[30540], 95.00th=[45876], 00:09:45.960 | 99.00th=[57934], 99.50th=[58983], 99.90th=[60031], 99.95th=[60031], 00:09:45.960 | 99.99th=[60556] 00:09:45.960 bw ( KiB/s): min=16384, max=16384, per=19.36%, avg=16384.00, stdev= 0.00, samples=2 00:09:45.960 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:45.960 lat (msec) : 2=0.01%, 10=14.93%, 20=64.81%, 50=18.30%, 100=1.94% 00:09:45.960 cpu : usr=3.29%, sys=4.69%, ctx=299, majf=0, minf=1 00:09:45.960 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, 
>=64=99.2% 00:09:45.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.960 issued rwts: total=3782,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.960 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.960 00:09:45.960 Run status group 0 (all jobs): 00:09:45.960 READ: bw=77.7MiB/s (81.5MB/s), 11.5MiB/s-35.8MiB/s (12.0MB/s-37.5MB/s), io=78.3MiB (82.1MB), run=1004-1008msec 00:09:45.960 WRITE: bw=82.6MiB/s (86.7MB/s), 12.0MiB/s-37.7MiB/s (12.5MB/s-39.6MB/s), io=83.3MiB (87.4MB), run=1004-1008msec 00:09:45.960 00:09:45.960 Disk stats (read/write): 00:09:45.960 nvme0n1: ios=3122/3548, merge=0/0, ticks=34953/58193, in_queue=93146, util=89.08% 00:09:45.960 nvme0n2: ios=2068/2223, merge=0/0, ticks=17714/28713, in_queue=46427, util=97.09% 00:09:45.960 nvme0n3: ios=7093/7168, merge=0/0, ticks=51145/39993, in_queue=91138, util=100.00% 00:09:45.960 nvme0n4: ios=3097/3072, merge=0/0, ticks=23581/23321, in_queue=46902, util=98.50% 00:09:45.960 10:51:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:45.960 [global] 00:09:45.960 thread=1 00:09:45.960 invalidate=1 00:09:45.960 rw=randwrite 00:09:45.960 time_based=1 00:09:45.960 runtime=1 00:09:45.960 ioengine=libaio 00:09:45.960 direct=1 00:09:45.960 bs=4096 00:09:45.960 iodepth=128 00:09:45.960 norandommap=0 00:09:45.960 numjobs=1 00:09:45.960 00:09:45.960 verify_dump=1 00:09:45.960 verify_backlog=512 00:09:45.960 verify_state_save=0 00:09:45.960 do_verify=1 00:09:45.960 verify=crc32c-intel 00:09:45.960 [job0] 00:09:45.960 filename=/dev/nvme0n1 00:09:45.960 [job1] 00:09:45.960 filename=/dev/nvme0n2 00:09:45.960 [job2] 00:09:45.960 filename=/dev/nvme0n3 00:09:45.960 [job3] 00:09:45.960 filename=/dev/nvme0n4 00:09:45.960 Could not set queue depth (nvme0n1) 
00:09:45.960 Could not set queue depth (nvme0n2) 00:09:45.960 Could not set queue depth (nvme0n3) 00:09:45.960 Could not set queue depth (nvme0n4) 00:09:46.220 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.220 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.220 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.220 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.220 fio-3.35 00:09:46.220 Starting 4 threads 00:09:47.603 00:09:47.603 job0: (groupid=0, jobs=1): err= 0: pid=3116945: Wed Nov 6 10:51:38 2024 00:09:47.603 read: IOPS=5886, BW=23.0MiB/s (24.1MB/s)(23.1MiB/1003msec) 00:09:47.603 slat (nsec): min=1019, max=18021k, avg=83838.79, stdev=677570.64 00:09:47.603 clat (usec): min=1881, max=42462, avg=11241.74, stdev=6064.66 00:09:47.603 lat (usec): min=1888, max=42492, avg=11325.58, stdev=6107.82 00:09:47.603 clat percentiles (usec): 00:09:47.603 | 1.00th=[ 3752], 5.00th=[ 5604], 10.00th=[ 5932], 20.00th=[ 7111], 00:09:47.603 | 30.00th=[ 7439], 40.00th=[ 7963], 50.00th=[ 8979], 60.00th=[11076], 00:09:47.603 | 70.00th=[12780], 80.00th=[14615], 90.00th=[19268], 95.00th=[23987], 00:09:47.603 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35390], 99.95th=[36439], 00:09:47.603 | 99.99th=[42206] 00:09:47.603 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:09:47.603 slat (nsec): min=1647, max=13530k, avg=74903.17, stdev=577363.79 00:09:47.603 clat (usec): min=704, max=38951, avg=9886.13, stdev=4338.90 00:09:47.603 lat (usec): min=714, max=38974, avg=9961.04, stdev=4395.47 00:09:47.603 clat percentiles (usec): 00:09:47.603 | 1.00th=[ 3163], 5.00th=[ 5014], 10.00th=[ 6456], 20.00th=[ 7111], 00:09:47.603 | 30.00th=[ 7242], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 
9765], 00:09:47.603 | 70.00th=[11207], 80.00th=[12387], 90.00th=[13698], 95.00th=[17695], 00:09:47.603 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28967], 99.95th=[34866], 00:09:47.603 | 99.99th=[39060] 00:09:47.603 bw ( KiB/s): min=24064, max=25088, per=30.06%, avg=24576.00, stdev=724.08, samples=2 00:09:47.603 iops : min= 6016, max= 6272, avg=6144.00, stdev=181.02, samples=2 00:09:47.603 lat (usec) : 750=0.04%, 1000=0.01% 00:09:47.603 lat (msec) : 2=0.16%, 4=1.19%, 10=58.21%, 20=34.46%, 50=5.93% 00:09:47.603 cpu : usr=5.09%, sys=6.89%, ctx=345, majf=0, minf=1 00:09:47.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:47.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.603 issued rwts: total=5904,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.603 job1: (groupid=0, jobs=1): err= 0: pid=3116959: Wed Nov 6 10:51:38 2024 00:09:47.603 read: IOPS=4188, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1006msec) 00:09:47.603 slat (nsec): min=949, max=8966.0k, avg=107394.14, stdev=623426.34 00:09:47.603 clat (usec): min=1440, max=26209, avg=13447.46, stdev=4218.30 00:09:47.603 lat (usec): min=2950, max=26240, avg=13554.85, stdev=4259.51 00:09:47.603 clat percentiles (usec): 00:09:47.603 | 1.00th=[ 4113], 5.00th=[ 5276], 10.00th=[ 9241], 20.00th=[11076], 00:09:47.603 | 30.00th=[11338], 40.00th=[12125], 50.00th=[12780], 60.00th=[13173], 00:09:47.603 | 70.00th=[14353], 80.00th=[16909], 90.00th=[19530], 95.00th=[21627], 00:09:47.603 | 99.00th=[22938], 99.50th=[24511], 99.90th=[26084], 99.95th=[26084], 00:09:47.603 | 99.99th=[26084] 00:09:47.603 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:09:47.603 slat (nsec): min=1597, max=9480.6k, avg=114112.81, stdev=586235.00 00:09:47.603 clat (usec): min=1284, max=53638, avg=15377.84, 
stdev=7982.93 00:09:47.603 lat (usec): min=1294, max=53641, avg=15491.96, stdev=8035.78 00:09:47.603 clat percentiles (usec): 00:09:47.603 | 1.00th=[ 3425], 5.00th=[ 6390], 10.00th=[ 9110], 20.00th=[10028], 00:09:47.603 | 30.00th=[10290], 40.00th=[11600], 50.00th=[12649], 60.00th=[15533], 00:09:47.603 | 70.00th=[17695], 80.00th=[20055], 90.00th=[24249], 95.00th=[27919], 00:09:47.603 | 99.00th=[50594], 99.50th=[51119], 99.90th=[53740], 99.95th=[53740], 00:09:47.603 | 99.99th=[53740] 00:09:47.603 bw ( KiB/s): min=16256, max=20528, per=22.50%, avg=18392.00, stdev=3020.76, samples=2 00:09:47.603 iops : min= 4064, max= 5132, avg=4598.00, stdev=755.19, samples=2 00:09:47.603 lat (msec) : 2=0.11%, 4=1.27%, 10=13.35%, 20=70.38%, 50=14.19% 00:09:47.603 lat (msec) : 100=0.69% 00:09:47.603 cpu : usr=3.28%, sys=5.27%, ctx=464, majf=0, minf=1 00:09:47.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:47.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.603 issued rwts: total=4214,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.603 job2: (groupid=0, jobs=1): err= 0: pid=3116978: Wed Nov 6 10:51:38 2024 00:09:47.603 read: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec) 00:09:47.603 slat (nsec): min=1025, max=11298k, avg=74537.48, stdev=559413.64 00:09:47.603 clat (usec): min=2923, max=28664, avg=9963.06, stdev=3670.52 00:09:47.603 lat (usec): min=2933, max=28672, avg=10037.60, stdev=3718.01 00:09:47.603 clat percentiles (usec): 00:09:47.603 | 1.00th=[ 4113], 5.00th=[ 5342], 10.00th=[ 6718], 20.00th=[ 7373], 00:09:47.603 | 30.00th=[ 7701], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9765], 00:09:47.603 | 70.00th=[10814], 80.00th=[12518], 90.00th=[14484], 95.00th=[16909], 00:09:47.603 | 99.00th=[23462], 99.50th=[25822], 99.90th=[27919], 99.95th=[28705], 
00:09:47.603 | 99.99th=[28705] 00:09:47.603 write: IOPS=6982, BW=27.3MiB/s (28.6MB/s)(27.4MiB/1006msec); 0 zone resets 00:09:47.603 slat (nsec): min=1623, max=10802k, avg=52069.77, stdev=414548.44 00:09:47.603 clat (usec): min=513, max=48499, avg=8738.54, stdev=5813.49 00:09:47.603 lat (usec): min=605, max=48509, avg=8790.61, stdev=5838.58 00:09:47.603 clat percentiles (usec): 00:09:47.603 | 1.00th=[ 1254], 5.00th=[ 2868], 10.00th=[ 3785], 20.00th=[ 4621], 00:09:47.603 | 30.00th=[ 5342], 40.00th=[ 6128], 50.00th=[ 7111], 60.00th=[ 8160], 00:09:47.603 | 70.00th=[10028], 80.00th=[11863], 90.00th=[14877], 95.00th=[21627], 00:09:47.603 | 99.00th=[29754], 99.50th=[32113], 99.90th=[44303], 99.95th=[45351], 00:09:47.603 | 99.99th=[48497] 00:09:47.603 bw ( KiB/s): min=24576, max=30592, per=33.74%, avg=27584.00, stdev=4253.95, samples=2 00:09:47.603 iops : min= 6144, max= 7648, avg=6896.00, stdev=1063.49, samples=2 00:09:47.603 lat (usec) : 750=0.07%, 1000=0.26% 00:09:47.603 lat (msec) : 2=1.18%, 4=5.56%, 10=59.01%, 20=29.93%, 50=4.00% 00:09:47.603 cpu : usr=6.57%, sys=7.56%, ctx=434, majf=0, minf=1 00:09:47.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:47.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.603 issued rwts: total=6656,7024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.603 job3: (groupid=0, jobs=1): err= 0: pid=3116985: Wed Nov 6 10:51:38 2024 00:09:47.603 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:09:47.603 slat (nsec): min=1006, max=19337k, avg=208507.28, stdev=1213868.94 00:09:47.603 clat (usec): min=12309, max=50875, avg=25994.46, stdev=8594.55 00:09:47.604 lat (usec): min=15268, max=50880, avg=26202.96, stdev=8585.33 00:09:47.604 clat percentiles (usec): 00:09:47.604 | 1.00th=[15270], 5.00th=[16909], 
10.00th=[17957], 20.00th=[19006], 00:09:47.604 | 30.00th=[19530], 40.00th=[20317], 50.00th=[22938], 60.00th=[25822], 00:09:47.604 | 70.00th=[31327], 80.00th=[34866], 90.00th=[38011], 95.00th=[40633], 00:09:47.604 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:09:47.604 | 99.99th=[51119] 00:09:47.604 write: IOPS=2768, BW=10.8MiB/s (11.3MB/s)(10.9MiB/1006msec); 0 zone resets 00:09:47.604 slat (nsec): min=1986, max=11822k, avg=159521.76, stdev=781356.39 00:09:47.604 clat (usec): min=598, max=53451, avg=21885.99, stdev=11389.32 00:09:47.604 lat (usec): min=609, max=53458, avg=22045.51, stdev=11442.96 00:09:47.604 clat percentiles (usec): 00:09:47.604 | 1.00th=[ 1188], 5.00th=[ 4883], 10.00th=[ 9634], 20.00th=[14877], 00:09:47.604 | 30.00th=[15664], 40.00th=[16581], 50.00th=[18220], 60.00th=[21365], 00:09:47.604 | 70.00th=[25822], 80.00th=[32637], 90.00th=[38011], 95.00th=[45351], 00:09:47.604 | 99.00th=[51119], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:09:47.604 | 99.99th=[53216] 00:09:47.604 bw ( KiB/s): min= 9984, max=11272, per=13.00%, avg=10628.00, stdev=910.75, samples=2 00:09:47.604 iops : min= 2496, max= 2818, avg=2657.00, stdev=227.69, samples=2 00:09:47.604 lat (usec) : 750=0.06%, 1000=0.19% 00:09:47.604 lat (msec) : 2=0.77%, 4=1.23%, 10=3.11%, 20=41.55%, 50=50.96% 00:09:47.604 lat (msec) : 100=2.13% 00:09:47.604 cpu : usr=2.49%, sys=3.68%, ctx=264, majf=0, minf=2 00:09:47.604 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:47.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.604 issued rwts: total=2560,2785,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.604 00:09:47.604 Run status group 0 (all jobs): 00:09:47.604 READ: bw=75.1MiB/s (78.7MB/s), 9.94MiB/s-25.8MiB/s (10.4MB/s-27.1MB/s), io=75.5MiB (79.2MB), 
run=1003-1006msec 00:09:47.604 WRITE: bw=79.8MiB/s (83.7MB/s), 10.8MiB/s-27.3MiB/s (11.3MB/s-28.6MB/s), io=80.3MiB (84.2MB), run=1003-1006msec 00:09:47.604 00:09:47.604 Disk stats (read/write): 00:09:47.604 nvme0n1: ios=4656/4754, merge=0/0, ticks=38143/33089, in_queue=71232, util=85.47% 00:09:47.604 nvme0n2: ios=3889/4096, merge=0/0, ticks=25009/27158, in_queue=52167, util=88.69% 00:09:47.604 nvme0n3: ios=5179/5597, merge=0/0, ticks=45069/46686, in_queue=91755, util=92.93% 00:09:47.604 nvme0n4: ios=2111/2496, merge=0/0, ticks=13814/15360, in_queue=29174, util=97.33% 00:09:47.604 10:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:47.604 10:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3117057 00:09:47.604 10:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:47.604 10:51:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:47.604 [global] 00:09:47.604 thread=1 00:09:47.604 invalidate=1 00:09:47.604 rw=read 00:09:47.604 time_based=1 00:09:47.604 runtime=10 00:09:47.604 ioengine=libaio 00:09:47.604 direct=1 00:09:47.604 bs=4096 00:09:47.604 iodepth=1 00:09:47.604 norandommap=1 00:09:47.604 numjobs=1 00:09:47.604 00:09:47.604 [job0] 00:09:47.604 filename=/dev/nvme0n1 00:09:47.604 [job1] 00:09:47.604 filename=/dev/nvme0n2 00:09:47.604 [job2] 00:09:47.604 filename=/dev/nvme0n3 00:09:47.604 [job3] 00:09:47.604 filename=/dev/nvme0n4 00:09:47.604 Could not set queue depth (nvme0n1) 00:09:47.604 Could not set queue depth (nvme0n2) 00:09:47.604 Could not set queue depth (nvme0n3) 00:09:47.604 Could not set queue depth (nvme0n4) 00:09:47.863 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.863 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:09:47.863 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.863 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.863 fio-3.35 00:09:47.863 Starting 4 threads 00:09:50.404 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:50.665 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=7532544, buflen=4096 00:09:50.665 fio: pid=3117507, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:50.665 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:50.926 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=10924032, buflen=4096 00:09:50.926 fio: pid=3117500, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:50.926 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.926 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:51.188 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=385024, buflen=4096 00:09:51.188 fio: pid=3117462, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:51.188 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.188 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 
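The job file that fio-wrapper feeds to fio above (4 KiB direct libaio reads, depth 1, four jobs on nvme0n1..n4) can be reproduced by hand. A minimal sketch — the `/tmp/nvmf-read.fio` path is an assumption for illustration; the real wrapper generates its own file inside the workspace:

```shell
#!/usr/bin/env bash
# Recreate the 4-job read workload shown in the log as a plain fio job file.
# /tmp/nvmf-read.fio is an assumed scratch path, not the wrapper's real output.
cat > /tmp/nvmf-read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
# To run it (requires fio and the connected NVMe namespaces):
#   fio /tmp/nvmf-read.fio
```

With `time_based=1` and `runtime=10` each job reads for ten seconds regardless of device size, which is why the test can delete the backing bdevs mid-run and expect `Operation not supported` errors, as the log shows.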
00:09:51.188 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=6426624, buflen=4096 00:09:51.188 fio: pid=3117480, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:51.188 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.188 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:51.188 00:09:51.188 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3117462: Wed Nov 6 10:51:42 2024 00:09:51.188 read: IOPS=31, BW=127KiB/s (130kB/s)(376KiB/2970msec) 00:09:51.188 slat (usec): min=8, max=13883, avg=171.81, stdev=1421.77 00:09:51.188 clat (usec): min=433, max=41919, avg=31184.21, stdev=17445.73 00:09:51.188 lat (usec): min=459, max=55245, avg=31357.57, stdev=17590.53 00:09:51.188 clat percentiles (usec): 00:09:51.188 | 1.00th=[ 433], 5.00th=[ 586], 10.00th=[ 660], 20.00th=[ 832], 00:09:51.188 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:51.188 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:51.188 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:51.188 | 99.99th=[41681] 00:09:51.188 bw ( KiB/s): min= 96, max= 192, per=1.67%, avg=131.20, stdev=37.78, samples=5 00:09:51.188 iops : min= 24, max= 48, avg=32.80, stdev= 9.44, samples=5 00:09:51.188 lat (usec) : 500=1.05%, 750=15.79%, 1000=7.37% 00:09:51.188 lat (msec) : 50=74.74% 00:09:51.188 cpu : usr=0.00%, sys=0.13%, ctx=96, majf=0, minf=1 00:09:51.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.188 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.188 
issued rwts: total=95,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.188 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3117480: Wed Nov 6 10:51:42 2024 00:09:51.188 read: IOPS=498, BW=1994KiB/s (2042kB/s)(6276KiB/3147msec) 00:09:51.188 slat (usec): min=6, max=26885, avg=67.61, stdev=876.19 00:09:51.188 clat (usec): min=491, max=42065, avg=1917.03, stdev=5798.13 00:09:51.188 lat (usec): min=518, max=42092, avg=1984.67, stdev=5858.69 00:09:51.188 clat percentiles (usec): 00:09:51.188 | 1.00th=[ 717], 5.00th=[ 865], 10.00th=[ 938], 20.00th=[ 996], 00:09:51.188 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1106], 00:09:51.188 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1221], 00:09:51.188 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:51.188 | 99.99th=[42206] 00:09:51.188 bw ( KiB/s): min= 1150, max= 3664, per=26.00%, avg=2039.67, stdev=1094.64, samples=6 00:09:51.188 iops : min= 287, max= 916, avg=509.83, stdev=273.74, samples=6 00:09:51.188 lat (usec) : 500=0.06%, 750=1.08%, 1000=19.30% 00:09:51.188 lat (msec) : 2=77.32%, 10=0.06%, 50=2.10% 00:09:51.188 cpu : usr=0.99%, sys=1.97%, ctx=1576, majf=0, minf=2 00:09:51.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.188 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.188 issued rwts: total=1570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.188 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3117500: Wed Nov 6 10:51:42 2024 00:09:51.188 read: IOPS=959, BW=3835KiB/s (3927kB/s)(10.4MiB/2782msec) 00:09:51.188 slat (usec): min=7, max=17025, avg=37.02, 
stdev=437.51 00:09:51.188 clat (usec): min=354, max=1558, avg=991.70, stdev=123.49 00:09:51.188 lat (usec): min=369, max=18092, avg=1028.73, stdev=456.91 00:09:51.188 clat percentiles (usec): 00:09:51.188 | 1.00th=[ 627], 5.00th=[ 758], 10.00th=[ 832], 20.00th=[ 898], 00:09:51.188 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 1012], 60.00th=[ 1029], 00:09:51.188 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:09:51.188 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1352], 99.95th=[ 1549], 00:09:51.188 | 99.99th=[ 1565] 00:09:51.188 bw ( KiB/s): min= 3688, max= 4184, per=49.73%, avg=3899.20, stdev=195.03, samples=5 00:09:51.188 iops : min= 922, max= 1046, avg=974.80, stdev=48.76, samples=5 00:09:51.188 lat (usec) : 500=0.19%, 750=3.82%, 1000=43.29% 00:09:51.188 lat (msec) : 2=52.66% 00:09:51.188 cpu : usr=1.08%, sys=2.80%, ctx=2670, majf=0, minf=1 00:09:51.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.188 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.188 issued rwts: total=2668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.188 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3117507: Wed Nov 6 10:51:42 2024 00:09:51.188 read: IOPS=707, BW=2828KiB/s (2896kB/s)(7356KiB/2601msec) 00:09:51.188 slat (nsec): min=6634, max=64352, avg=27361.88, stdev=3644.78 00:09:51.188 clat (usec): min=579, max=42053, avg=1366.32, stdev=3266.83 00:09:51.188 lat (usec): min=588, max=42080, avg=1393.68, stdev=3266.70 00:09:51.188 clat percentiles (usec): 00:09:51.188 | 1.00th=[ 742], 5.00th=[ 873], 10.00th=[ 971], 20.00th=[ 1045], 00:09:51.188 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:09:51.188 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1205], 
95.00th=[ 1237], 00:09:51.188 | 99.00th=[ 1418], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:51.188 | 99.99th=[42206] 00:09:51.188 bw ( KiB/s): min= 1816, max= 3520, per=36.23%, avg=2841.60, stdev=744.07, samples=5 00:09:51.188 iops : min= 454, max= 880, avg=710.40, stdev=186.02, samples=5 00:09:51.188 lat (usec) : 750=1.30%, 1000=12.55% 00:09:51.188 lat (msec) : 2=85.38%, 4=0.05%, 50=0.65% 00:09:51.188 cpu : usr=1.23%, sys=2.92%, ctx=1841, majf=0, minf=2 00:09:51.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.188 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.188 issued rwts: total=1840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.188 00:09:51.188 Run status group 0 (all jobs): 00:09:51.188 READ: bw=7841KiB/s (8029kB/s), 127KiB/s-3835KiB/s (130kB/s-3927kB/s), io=24.1MiB (25.3MB), run=2601-3147msec 00:09:51.188 00:09:51.188 Disk stats (read/write): 00:09:51.188 nvme0n1: ios=90/0, merge=0/0, ticks=2810/0, in_queue=2810, util=94.32% 00:09:51.188 nvme0n2: ios=1554/0, merge=0/0, ticks=2805/0, in_queue=2805, util=93.80% 00:09:51.188 nvme0n3: ios=2521/0, merge=0/0, ticks=2455/0, in_queue=2455, util=96.03% 00:09:51.188 nvme0n4: ios=1840/0, merge=0/0, ticks=2332/0, in_queue=2332, util=96.27% 00:09:51.450 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.450 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:51.710 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.710 10:51:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:51.710 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.710 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:51.971 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.971 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:52.232 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:52.232 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3117057 00:09:52.232 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:52.232 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:52.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.232 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:52.232 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:09:52.232 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:52.232 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.232 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 
00:09:52.232 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.232 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:09:52.232 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:52.232 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:52.232 nvmf hotplug test: fio failed as expected 00:09:52.232 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.493 rmmod nvme_tcp 
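The `waitforserial_disconnect` helper traced above repeatedly runs `lsblk -l -o NAME,SERIAL | grep -q -w <serial>` until the serial no longer appears. A stand-alone sketch of that polling pattern — the function name, 10-try limit, and 1-second delay are assumptions, not the suite's exact values:

```shell
# Poll until no block device with the given serial shows up in lsblk output.
# Mirrors the waitforserial_disconnect pattern from common.sh; returns 0 once
# the serial is gone, 1 if it is still present after the assumed 10 tries.
wait_serial_gone() {
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL 2>/dev/null | grep -q -w "$serial"; do
        i=$((i + 1))
        [ "$i" -ge 10 ] && return 1   # device still attached; give up
        sleep 1
    done
    return 0
}
```

In the log the serial `SPDKISFASTANDAWESOME` disappears on the first check (the helper hits its `return 0` at common.sh line 1233), because `nvme disconnect` has already torn the controller down.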
00:09:52.493 rmmod nvme_fabrics 00:09:52.493 rmmod nvme_keyring 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3113482 ']' 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3113482 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3113482 ']' 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3113482 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3113482 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3113482' 00:09:52.493 killing process with pid 3113482 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3113482 00:09:52.493 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3113482 00:09:52.754 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.754 
10:51:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:52.754 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:52.755 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:52.755 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:52.755 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.755 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.755 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.755 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:52.755 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.755 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.755 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:55.298 00:09:55.298 real 0m29.337s 00:09:55.298 user 2m32.261s 00:09:55.298 sys 0m9.370s 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.298 ************************************ 00:09:55.298 END TEST nvmf_fio_target 00:09:55.298 ************************************ 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.298 ************************************ 00:09:55.298 START TEST nvmf_bdevio 00:09:55.298 ************************************ 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:55.298 * Looking for test storage... 00:09:55.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.298 10:51:46 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.298 10:51:46 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:55.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.298 --rc genhtml_branch_coverage=1 00:09:55.298 --rc genhtml_function_coverage=1 00:09:55.298 --rc genhtml_legend=1 00:09:55.298 --rc geninfo_all_blocks=1 00:09:55.298 --rc geninfo_unexecuted_blocks=1 00:09:55.298 00:09:55.298 ' 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:55.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.298 --rc genhtml_branch_coverage=1 00:09:55.298 --rc genhtml_function_coverage=1 00:09:55.298 --rc genhtml_legend=1 00:09:55.298 --rc geninfo_all_blocks=1 00:09:55.298 --rc geninfo_unexecuted_blocks=1 00:09:55.298 00:09:55.298 ' 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:55.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.298 --rc genhtml_branch_coverage=1 00:09:55.298 --rc genhtml_function_coverage=1 00:09:55.298 --rc genhtml_legend=1 00:09:55.298 --rc geninfo_all_blocks=1 00:09:55.298 --rc geninfo_unexecuted_blocks=1 00:09:55.298 00:09:55.298 ' 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:55.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.298 --rc genhtml_branch_coverage=1 00:09:55.298 --rc 
genhtml_function_coverage=1 00:09:55.298 --rc genhtml_legend=1 00:09:55.298 --rc geninfo_all_blocks=1 00:09:55.298 --rc geninfo_unexecuted_blocks=1 00:09:55.298 00:09:55.298 ' 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.298 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.299 10:51:46 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
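The `paths/export.sh` trace above prepends the same go/protoc/golangci directories each time it is sourced, so PATH accumulates many duplicate entries (visible in the long `echo` output). A small sketch for collapsing such a colon-separated list while preserving first-occurrence order — this helper is illustrative only and is not part of the test suite:

```shell
# Collapse duplicate entries in a colon-separated PATH-like string,
# keeping the first occurrence of each directory.
dedupe_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}
```

Duplicates are harmless to lookup semantics (the first match wins anyway), which is presumably why the suite never bothers to deduplicate; they only make traces like this one noisier.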
MALLOC_BLOCK_SIZE=512 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.299 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.437 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.437 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:03.438 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.438 10:51:53 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:03.438 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:03.438 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:03.438 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:03.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:10:03.438 00:10:03.438 --- 10.0.0.2 ping statistics --- 00:10:03.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.438 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:03.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:10:03.438 00:10:03.438 --- 10.0.0.1 ping statistics --- 00:10:03.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.438 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:03.438 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.439 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:03.439 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:03.439 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:03.439 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:03.439 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:03.439 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.439 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3122565 00:10:03.439 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3122565 00:10:03.439 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:03.439 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3122565 ']' 00:10:03.439 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.439 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:03.439 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.439 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:03.439 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.439 [2024-11-06 10:51:53.822640] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:10:03.439 [2024-11-06 10:51:53.822689] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.439 [2024-11-06 10:51:53.915818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.439 [2024-11-06 10:51:53.951310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.439 [2024-11-06 10:51:53.951347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:03.439 [2024-11-06 10:51:53.951356] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.439 [2024-11-06 10:51:53.951362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.439 [2024-11-06 10:51:53.951368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.439 [2024-11-06 10:51:53.952888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:03.439 [2024-11-06 10:51:53.953122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:03.439 [2024-11-06 10:51:53.953241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.439 [2024-11-06 10:51:53.953241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.439 [2024-11-06 10:51:54.077218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.439 Malloc0 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.439 [2024-11-06 
10:51:54.158039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:03.439 { 00:10:03.439 "params": { 00:10:03.439 "name": "Nvme$subsystem", 00:10:03.439 "trtype": "$TEST_TRANSPORT", 00:10:03.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.439 "adrfam": "ipv4", 00:10:03.439 "trsvcid": "$NVMF_PORT", 00:10:03.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.439 "hdgst": ${hdgst:-false}, 00:10:03.439 "ddgst": ${ddgst:-false} 00:10:03.439 }, 00:10:03.439 "method": "bdev_nvme_attach_controller" 00:10:03.439 } 00:10:03.439 EOF 00:10:03.439 )") 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:03.439 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:03.439 "params": { 00:10:03.439 "name": "Nvme1", 00:10:03.439 "trtype": "tcp", 00:10:03.439 "traddr": "10.0.0.2", 00:10:03.439 "adrfam": "ipv4", 00:10:03.439 "trsvcid": "4420", 00:10:03.439 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.439 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.439 "hdgst": false, 00:10:03.439 "ddgst": false 00:10:03.439 }, 00:10:03.439 "method": "bdev_nvme_attach_controller" 00:10:03.439 }' 00:10:03.439 [2024-11-06 10:51:54.221696] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:10:03.439 [2024-11-06 10:51:54.221791] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3122588 ] 00:10:03.439 [2024-11-06 10:51:54.302092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:03.439 [2024-11-06 10:51:54.346348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.439 [2024-11-06 10:51:54.346467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.439 [2024-11-06 10:51:54.346470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.439 I/O targets: 00:10:03.439 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:03.439 00:10:03.439 00:10:03.439 CUnit - A unit testing framework for C - Version 2.1-3 00:10:03.439 http://cunit.sourceforge.net/ 00:10:03.439 00:10:03.439 00:10:03.439 Suite: bdevio tests on: Nvme1n1 00:10:03.439 Test: blockdev write read block ...passed 00:10:03.439 Test: blockdev write zeroes read block ...passed 00:10:03.439 Test: blockdev write zeroes read no split ...passed 00:10:03.439 Test: blockdev write zeroes read split 
...passed 00:10:03.439 Test: blockdev write zeroes read split partial ...passed 00:10:03.439 Test: blockdev reset ...[2024-11-06 10:51:54.700145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:03.439 [2024-11-06 10:51:54.700209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b21970 (9): Bad file descriptor 00:10:03.439 [2024-11-06 10:51:54.717930] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:03.439 passed 00:10:03.439 Test: blockdev write read 8 blocks ...passed 00:10:03.439 Test: blockdev write read size > 128k ...passed 00:10:03.439 Test: blockdev write read invalid size ...passed 00:10:03.439 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:03.439 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:03.439 Test: blockdev write read max offset ...passed 00:10:03.439 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:03.439 Test: blockdev writev readv 8 blocks ...passed 00:10:03.439 Test: blockdev writev readv 30 x 1block ...passed 00:10:03.702 Test: blockdev writev readv block ...passed 00:10:03.702 Test: blockdev writev readv size > 128k ...passed 00:10:03.702 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:03.702 Test: blockdev comparev and writev ...[2024-11-06 10:51:54.901558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.702 [2024-11-06 10:51:54.901588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:03.702 [2024-11-06 10:51:54.901600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.702 [2024-11-06 
10:51:54.901606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:03.702 [2024-11-06 10:51:54.902120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.702 [2024-11-06 10:51:54.902129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:03.702 [2024-11-06 10:51:54.902138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.702 [2024-11-06 10:51:54.902144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:03.702 [2024-11-06 10:51:54.902638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.702 [2024-11-06 10:51:54.902646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:03.702 [2024-11-06 10:51:54.902655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.702 [2024-11-06 10:51:54.902661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:03.702 [2024-11-06 10:51:54.903120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.702 [2024-11-06 10:51:54.903128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:03.702 [2024-11-06 10:51:54.903138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.702 [2024-11-06 10:51:54.903143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:03.702 passed 00:10:03.702 Test: blockdev nvme passthru rw ...passed 00:10:03.702 Test: blockdev nvme passthru vendor specific ...[2024-11-06 10:51:54.987607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.702 [2024-11-06 10:51:54.987620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:03.702 [2024-11-06 10:51:54.987953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.702 [2024-11-06 10:51:54.987961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:03.702 [2024-11-06 10:51:54.988184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.702 [2024-11-06 10:51:54.988192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:03.702 [2024-11-06 10:51:54.988408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.702 [2024-11-06 10:51:54.988415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:03.702 passed 00:10:03.702 Test: blockdev nvme admin passthru ...passed 00:10:03.702 Test: blockdev copy ...passed 00:10:03.702 00:10:03.702 Run Summary: Type Total Ran Passed Failed Inactive 00:10:03.702 suites 1 1 n/a 0 0 00:10:03.702 tests 23 23 23 0 0 00:10:03.702 asserts 152 152 152 0 n/a 00:10:03.702 00:10:03.702 Elapsed time = 1.118 seconds 
00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.991 rmmod nvme_tcp 00:10:03.991 rmmod nvme_fabrics 00:10:03.991 rmmod nvme_keyring 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3122565 ']' 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3122565 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 
-- # '[' -z 3122565 ']' 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3122565 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3122565 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3122565' 00:10:03.991 killing process with pid 3122565 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3122565 00:10:03.991 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3122565 00:10:04.253 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:04.253 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:04.253 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:04.253 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:04.253 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:04.253 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:04.253 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:04.253 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:10:04.253 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:04.253 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.253 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.253 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.167 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:06.167 00:10:06.167 real 0m11.381s 00:10:06.167 user 0m9.978s 00:10:06.167 sys 0m6.057s 00:10:06.167 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:06.167 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:06.167 ************************************ 00:10:06.167 END TEST nvmf_bdevio 00:10:06.167 ************************************ 00:10:06.428 10:51:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:06.428 00:10:06.428 real 5m0.192s 00:10:06.428 user 11m32.087s 00:10:06.428 sys 1m46.195s 00:10:06.428 10:51:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:06.428 10:51:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:06.428 ************************************ 00:10:06.428 END TEST nvmf_target_core 00:10:06.428 ************************************ 00:10:06.428 10:51:57 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:06.428 10:51:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:06.428 10:51:57 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:06.428 10:51:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:10:06.428 ************************************ 00:10:06.428 START TEST nvmf_target_extra 00:10:06.428 ************************************ 00:10:06.428 10:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:06.428 * Looking for test storage... 00:10:06.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:06.428 10:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:06.428 10:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:06.428 10:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:06.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.689 --rc genhtml_branch_coverage=1 00:10:06.689 --rc genhtml_function_coverage=1 00:10:06.689 --rc genhtml_legend=1 00:10:06.689 --rc geninfo_all_blocks=1 
00:10:06.689 --rc geninfo_unexecuted_blocks=1 00:10:06.689 00:10:06.689 ' 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:06.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.689 --rc genhtml_branch_coverage=1 00:10:06.689 --rc genhtml_function_coverage=1 00:10:06.689 --rc genhtml_legend=1 00:10:06.689 --rc geninfo_all_blocks=1 00:10:06.689 --rc geninfo_unexecuted_blocks=1 00:10:06.689 00:10:06.689 ' 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:06.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.689 --rc genhtml_branch_coverage=1 00:10:06.689 --rc genhtml_function_coverage=1 00:10:06.689 --rc genhtml_legend=1 00:10:06.689 --rc geninfo_all_blocks=1 00:10:06.689 --rc geninfo_unexecuted_blocks=1 00:10:06.689 00:10:06.689 ' 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:06.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.689 --rc genhtml_branch_coverage=1 00:10:06.689 --rc genhtml_function_coverage=1 00:10:06.689 --rc genhtml_legend=1 00:10:06.689 --rc geninfo_all_blocks=1 00:10:06.689 --rc geninfo_unexecuted_blocks=1 00:10:06.689 00:10:06.689 ' 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:06.689 ************************************ 00:10:06.689 START TEST nvmf_example 00:10:06.689 ************************************ 00:10:06.689 10:51:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:06.689 * Looking for test storage... 00:10:06.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.690 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:06.690 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:06.690 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.973 
10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:06.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.973 --rc genhtml_branch_coverage=1 00:10:06.973 --rc genhtml_function_coverage=1 00:10:06.973 --rc genhtml_legend=1 00:10:06.973 --rc geninfo_all_blocks=1 00:10:06.973 --rc geninfo_unexecuted_blocks=1 00:10:06.973 00:10:06.973 ' 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:06.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.973 --rc genhtml_branch_coverage=1 00:10:06.973 --rc genhtml_function_coverage=1 00:10:06.973 --rc genhtml_legend=1 00:10:06.973 --rc geninfo_all_blocks=1 00:10:06.973 --rc geninfo_unexecuted_blocks=1 00:10:06.973 00:10:06.973 ' 00:10:06.973 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:06.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.974 --rc genhtml_branch_coverage=1 00:10:06.974 --rc genhtml_function_coverage=1 00:10:06.974 --rc genhtml_legend=1 00:10:06.974 --rc geninfo_all_blocks=1 00:10:06.974 --rc geninfo_unexecuted_blocks=1 00:10:06.974 00:10:06.974 ' 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:06.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.974 --rc 
genhtml_branch_coverage=1 00:10:06.974 --rc genhtml_function_coverage=1 00:10:06.974 --rc genhtml_legend=1 00:10:06.974 --rc geninfo_all_blocks=1 00:10:06.974 --rc geninfo_unexecuted_blocks=1 00:10:06.974 00:10:06.974 ' 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:06.974 10:51:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.974 
10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:06.974 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:15.236 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:15.237 10:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:15.237 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:15.237 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:15.237 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.237 10:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:15.237 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.237 
10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:15.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:15.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:10:15.237 00:10:15.237 --- 10.0.0.2 ping statistics --- 00:10:15.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.237 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:15.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:10:15.237 00:10:15.237 --- 10.0.0.1 ping statistics --- 00:10:15.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.237 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:15.237 10:52:05 
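For reference, the namespace plumbing that nvmf_tcp_init performs in the log above (move one port into a network namespace, address both ends, open the NVMe/TCP port in iptables, then ping in both directions) can be sketched as a standalone script. The interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.0/24 addresses are taken from this log; the run() wrapper is an illustrative stand-in that only echoes each command, so the sketch can be read and dry-run without root privileges.

```shell
#!/usr/bin/env bash
# Sketch of the target-namespace setup performed by nvmf_tcp_init (nvmf/common.sh).
# run() echoes instead of executing, so this is a dry run (no root required).
set -euo pipefail

TARGET_IF=cvl_0_0        # port moved into the target namespace (from the log)
INITIATOR_IF=cvl_0_1     # port left in the default namespace
NS=cvl_0_0_ns_spdk       # namespace name used by the test

run() { echo "+ $*"; }   # swap the body for: "$@" to actually execute as root

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

The two pings mirror the checks in the log: the default namespace must reach the target address inside the namespace, and the namespaced side must reach the initiator address back.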
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3127312 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3127312 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 3127312 ']' 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:10:15.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:15.237 10:52:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:15.238 
10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.238 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.499 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.499 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:15.499 10:52:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:25.500 Initializing NVMe Controllers 00:10:25.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:25.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:25.500 Initialization complete. Launching workers. 00:10:25.500 ======================================================== 00:10:25.500 Latency(us) 00:10:25.500 Device Information : IOPS MiB/s Average min max 00:10:25.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18806.20 73.46 3404.48 861.96 19211.48 00:10:25.500 ======================================================== 00:10:25.500 Total : 18806.20 73.46 3404.48 861.96 19211.48 00:10:25.500 00:10:25.763 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:25.763 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:25.763 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:25.763 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:25.763 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:25.763 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:25.763 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:25.763 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:25.763 rmmod nvme_tcp 00:10:25.763 rmmod nvme_fabrics 00:10:25.763 rmmod nvme_keyring 00:10:25.763 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:25.763 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
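The target configuration replayed through rpc_cmd in the log above (create the TCP transport, a malloc bdev, a subsystem, attach the namespace, add a listener) followed by the spdk_nvme_perf run can be condensed into the sketch below. The bare rpc.py and spdk_nvme_perf names and the echo-only run() wrapper are placeholders; the real test invokes them via rpc_cmd and the full build paths shown in the log.

```shell
#!/usr/bin/env bash
# Sketch of the target setup driven through rpc_cmd in target/nvmf_example.sh.
# run() echoes the commands instead of executing them (dry run).
set -euo pipefail

NQN=nqn.2016-06.io.spdk:cnode1
ADDR=10.0.0.2

run() { echo "+ $*"; }   # replace the body with: "$@" against a live SPDK target

run rpc.py nvmf_create_transport -t tcp -o -u 8192
run rpc.py bdev_malloc_create 64 512                 # 64 MiB bdev, 512 B blocks
run rpc.py nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
run rpc.py nvmf_subsystem_add_ns "$NQN" Malloc0
run rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a "$ADDR" -s 4420

# 64-deep queue, 4 KiB I/O, 30% reads, 10 s run against the listener above
run spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r "trtype:tcp adrfam:IPv4 traddr:$ADDR trsvcid:4420 subnqn:$NQN"
```

The MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 values set at the top of this section are what feed the bdev_malloc_create arguments here.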
00:10:25.763 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:25.763 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3127312 ']' 00:10:25.763 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3127312 00:10:25.763 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 3127312 ']' 00:10:25.763 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 3127312 00:10:25.763 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:10:25.763 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:25.763 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3127312 00:10:25.763 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:10:25.763 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:10:25.763 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3127312' 00:10:25.763 killing process with pid 3127312 00:10:25.763 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 3127312 00:10:25.763 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 3127312 00:10:26.026 nvmf threads initialize successfully 00:10:26.026 bdev subsystem init successfully 00:10:26.026 created a nvmf target service 00:10:26.026 create targets's poll groups done 00:10:26.026 all subsystems of target started 00:10:26.026 nvmf target is running 00:10:26.026 all subsystems of target stopped 00:10:26.026 destroy targets's poll groups done 00:10:26.026 destroyed the nvmf target service 00:10:26.026 bdev subsystem 
finish successfully 00:10:26.026 nvmf threads destroy successfully 00:10:26.026 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:26.026 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:26.026 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:26.026 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:26.026 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:26.026 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:26.026 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:26.026 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:26.026 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:26.026 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.026 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.026 10:52:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.943 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:27.943 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:27.943 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:27.943 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.943 00:10:27.943 real 0m21.394s 00:10:27.943 user 0m46.896s 00:10:27.943 sys 0m6.832s 00:10:27.943 
10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:27.943 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.943 ************************************ 00:10:27.943 END TEST nvmf_example 00:10:27.943 ************************************ 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:28.205 ************************************ 00:10:28.205 START TEST nvmf_filesystem 00:10:28.205 ************************************ 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:28.205 * Looking for test storage... 
00:10:28.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:28.205 
10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.205 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:28.206 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:28.206 --rc genhtml_branch_coverage=1 00:10:28.206 --rc genhtml_function_coverage=1 00:10:28.206 --rc genhtml_legend=1 00:10:28.206 --rc geninfo_all_blocks=1 00:10:28.206 --rc geninfo_unexecuted_blocks=1 00:10:28.206 00:10:28.206 ' 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:28.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.206 --rc genhtml_branch_coverage=1 00:10:28.206 --rc genhtml_function_coverage=1 00:10:28.206 --rc genhtml_legend=1 00:10:28.206 --rc geninfo_all_blocks=1 00:10:28.206 --rc geninfo_unexecuted_blocks=1 00:10:28.206 00:10:28.206 ' 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:28.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.206 --rc genhtml_branch_coverage=1 00:10:28.206 --rc genhtml_function_coverage=1 00:10:28.206 --rc genhtml_legend=1 00:10:28.206 --rc geninfo_all_blocks=1 00:10:28.206 --rc geninfo_unexecuted_blocks=1 00:10:28.206 00:10:28.206 ' 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:28.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.206 --rc genhtml_branch_coverage=1 00:10:28.206 --rc genhtml_function_coverage=1 00:10:28.206 --rc genhtml_legend=1 00:10:28.206 --rc geninfo_all_blocks=1 00:10:28.206 --rc geninfo_unexecuted_blocks=1 00:10:28.206 00:10:28.206 ' 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:28.206 10:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:28.206 10:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:28.206 10:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:28.206 10:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:28.206 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:28.207 10:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:28.207 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:28.472 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:28.472 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:28.472 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:28.472 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:28.472 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:28.472 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:28.472 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:28.472 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:28.472 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:28.472 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:28.472 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:28.472 
10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:28.472 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:28.472 #define SPDK_CONFIG_H 00:10:28.472 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:28.472 #define SPDK_CONFIG_APPS 1 00:10:28.472 #define SPDK_CONFIG_ARCH native 00:10:28.472 #undef SPDK_CONFIG_ASAN 00:10:28.472 #undef SPDK_CONFIG_AVAHI 00:10:28.472 #undef SPDK_CONFIG_CET 00:10:28.472 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:28.472 #define SPDK_CONFIG_COVERAGE 1 00:10:28.472 #define SPDK_CONFIG_CROSS_PREFIX 00:10:28.472 #undef SPDK_CONFIG_CRYPTO 00:10:28.472 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:28.472 #undef SPDK_CONFIG_CUSTOMOCF 00:10:28.472 #undef SPDK_CONFIG_DAOS 00:10:28.472 #define SPDK_CONFIG_DAOS_DIR 00:10:28.472 #define SPDK_CONFIG_DEBUG 1 00:10:28.472 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:28.472 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:28.472 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:28.472 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:28.472 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:28.472 #undef SPDK_CONFIG_DPDK_UADK 00:10:28.472 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:28.472 #define SPDK_CONFIG_EXAMPLES 1 00:10:28.472 #undef SPDK_CONFIG_FC 00:10:28.472 #define SPDK_CONFIG_FC_PATH 00:10:28.472 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:28.472 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:28.472 #define SPDK_CONFIG_FSDEV 1 00:10:28.472 #undef SPDK_CONFIG_FUSE 00:10:28.472 #undef SPDK_CONFIG_FUZZER 00:10:28.472 #define SPDK_CONFIG_FUZZER_LIB 00:10:28.472 #undef SPDK_CONFIG_GOLANG 00:10:28.472 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:28.472 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:28.472 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:28.472 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:28.472 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:28.472 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:28.472 #undef SPDK_CONFIG_HAVE_LZ4 00:10:28.472 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:28.472 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:28.472 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:28.472 #define SPDK_CONFIG_IDXD 1 00:10:28.472 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:28.472 #undef SPDK_CONFIG_IPSEC_MB 00:10:28.472 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:28.472 #define SPDK_CONFIG_ISAL 1 00:10:28.472 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:28.472 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:28.472 #define SPDK_CONFIG_LIBDIR 00:10:28.472 #undef SPDK_CONFIG_LTO 00:10:28.472 #define SPDK_CONFIG_MAX_LCORES 128 00:10:28.472 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:28.472 #define SPDK_CONFIG_NVME_CUSE 1 00:10:28.472 #undef SPDK_CONFIG_OCF 00:10:28.472 #define SPDK_CONFIG_OCF_PATH 00:10:28.472 #define SPDK_CONFIG_OPENSSL_PATH 00:10:28.472 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:28.472 #define SPDK_CONFIG_PGO_DIR 00:10:28.472 #undef SPDK_CONFIG_PGO_USE 00:10:28.472 #define SPDK_CONFIG_PREFIX /usr/local 00:10:28.472 #undef SPDK_CONFIG_RAID5F 00:10:28.472 #undef SPDK_CONFIG_RBD 00:10:28.472 #define SPDK_CONFIG_RDMA 1 00:10:28.472 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:28.472 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:28.472 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:28.472 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:28.472 #define SPDK_CONFIG_SHARED 1 00:10:28.472 #undef SPDK_CONFIG_SMA 00:10:28.472 #define SPDK_CONFIG_TESTS 1 00:10:28.472 #undef SPDK_CONFIG_TSAN 00:10:28.472 #define SPDK_CONFIG_UBLK 1 00:10:28.472 #define SPDK_CONFIG_UBSAN 1 00:10:28.472 #undef SPDK_CONFIG_UNIT_TESTS 00:10:28.472 #undef SPDK_CONFIG_URING 00:10:28.472 #define SPDK_CONFIG_URING_PATH 00:10:28.472 #undef SPDK_CONFIG_URING_ZNS 00:10:28.472 #undef SPDK_CONFIG_USDT 00:10:28.472 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:28.472 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:28.472 #define SPDK_CONFIG_VFIO_USER 1 00:10:28.472 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:28.472 #define SPDK_CONFIG_VHOST 1 00:10:28.472 #define SPDK_CONFIG_VIRTIO 1 00:10:28.472 #undef SPDK_CONFIG_VTUNE 00:10:28.472 #define SPDK_CONFIG_VTUNE_DIR 00:10:28.472 #define SPDK_CONFIG_WERROR 1 00:10:28.472 #define SPDK_CONFIG_WPDK_DIR 00:10:28.472 #undef SPDK_CONFIG_XNVME 00:10:28.472 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:28.472 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:28.472 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.472 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.472 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.472 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:28.473 10:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:28.473 
10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:28.473 10:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:28.473 
10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:28.473 10:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:28.473 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3130106 ]] 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3130106 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.GWkqAk 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.GWkqAk/tests/target /tmp/spdk.GWkqAk 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122526740480 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356541952 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6829801472 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 
00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668237824 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678268928 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847947264 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871310848 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23363584 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:28.474 10:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677310464 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678273024 00:10:28.474 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=962560 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:28.475 * Looking for test storage... 
00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122526740480 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9044393984 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.475 10:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:28.475 10:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:28.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.475 --rc genhtml_branch_coverage=1 00:10:28.475 --rc genhtml_function_coverage=1 00:10:28.475 --rc genhtml_legend=1 00:10:28.475 --rc geninfo_all_blocks=1 00:10:28.475 --rc geninfo_unexecuted_blocks=1 00:10:28.475 00:10:28.475 ' 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:28.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.475 --rc genhtml_branch_coverage=1 00:10:28.475 --rc genhtml_function_coverage=1 00:10:28.475 --rc genhtml_legend=1 00:10:28.475 --rc geninfo_all_blocks=1 00:10:28.475 --rc geninfo_unexecuted_blocks=1 00:10:28.475 00:10:28.475 ' 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:28.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.475 --rc genhtml_branch_coverage=1 00:10:28.475 --rc genhtml_function_coverage=1 00:10:28.475 --rc genhtml_legend=1 00:10:28.475 --rc geninfo_all_blocks=1 00:10:28.475 --rc geninfo_unexecuted_blocks=1 00:10:28.475 00:10:28.475 ' 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:28.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.475 --rc genhtml_branch_coverage=1 00:10:28.475 --rc genhtml_function_coverage=1 00:10:28.475 --rc genhtml_legend=1 00:10:28.475 --rc geninfo_all_blocks=1 00:10:28.475 --rc geninfo_unexecuted_blocks=1 00:10:28.475 00:10:28.475 ' 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.475 10:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:28.475 10:52:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.625 10:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:36.625 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:36.625 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.625 10:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:36.625 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.625 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:36.626 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:36.626 10:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:36.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:36.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:10:36.626 00:10:36.626 --- 10.0.0.2 ping statistics --- 00:10:36.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.626 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:36.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:10:36.626 00:10:36.626 --- 10.0.0.1 ping statistics --- 00:10:36.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.626 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:36.626 10:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:36.626 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:36.626 ************************************ 00:10:36.626 START TEST nvmf_filesystem_no_in_capsule 00:10:36.626 ************************************ 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3133737 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3133737 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@833 -- # '[' -z 3133737 ']' 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.626 [2024-11-06 10:52:27.104998] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:10:36.626 [2024-11-06 10:52:27.105062] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.626 [2024-11-06 10:52:27.187400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.626 [2024-11-06 10:52:27.229559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.626 [2024-11-06 10:52:27.229596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:36.626 [2024-11-06 10:52:27.229605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.626 [2024-11-06 10:52:27.229611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.626 [2024-11-06 10:52:27.229617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.626 [2024-11-06 10:52:27.231458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.626 [2024-11-06 10:52:27.231575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.626 [2024-11-06 10:52:27.231731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.626 [2024-11-06 10:52:27.231733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.626 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:36.627 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:36.627 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.627 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.627 [2024-11-06 10:52:27.946642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.627 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.627 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:36.627 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.627 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.888 Malloc1 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.888 [2024-11-06 10:52:28.085257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:36.888 10:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:36.888 { 00:10:36.888 "name": "Malloc1", 00:10:36.888 "aliases": [ 00:10:36.888 "cb0d20ab-f6bf-4922-82c2-04add1c8077c" 00:10:36.888 ], 00:10:36.888 "product_name": "Malloc disk", 00:10:36.888 "block_size": 512, 00:10:36.888 "num_blocks": 1048576, 00:10:36.888 "uuid": "cb0d20ab-f6bf-4922-82c2-04add1c8077c", 00:10:36.888 "assigned_rate_limits": { 00:10:36.888 "rw_ios_per_sec": 0, 00:10:36.888 "rw_mbytes_per_sec": 0, 00:10:36.888 "r_mbytes_per_sec": 0, 00:10:36.888 "w_mbytes_per_sec": 0 00:10:36.888 }, 00:10:36.888 "claimed": true, 00:10:36.888 "claim_type": "exclusive_write", 00:10:36.888 "zoned": false, 00:10:36.888 "supported_io_types": { 00:10:36.888 "read": true, 00:10:36.888 "write": true, 00:10:36.888 "unmap": true, 00:10:36.888 "flush": true, 00:10:36.888 "reset": true, 00:10:36.888 "nvme_admin": false, 00:10:36.888 "nvme_io": false, 00:10:36.888 "nvme_io_md": false, 00:10:36.888 "write_zeroes": true, 00:10:36.888 "zcopy": true, 00:10:36.888 "get_zone_info": false, 00:10:36.888 "zone_management": false, 00:10:36.888 "zone_append": false, 00:10:36.888 "compare": false, 00:10:36.888 "compare_and_write": 
false, 00:10:36.888 "abort": true, 00:10:36.888 "seek_hole": false, 00:10:36.888 "seek_data": false, 00:10:36.888 "copy": true, 00:10:36.888 "nvme_iov_md": false 00:10:36.888 }, 00:10:36.888 "memory_domains": [ 00:10:36.888 { 00:10:36.888 "dma_device_id": "system", 00:10:36.888 "dma_device_type": 1 00:10:36.888 }, 00:10:36.888 { 00:10:36.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.888 "dma_device_type": 2 00:10:36.888 } 00:10:36.888 ], 00:10:36.888 "driver_specific": {} 00:10:36.888 } 00:10:36.888 ]' 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:36.888 10:52:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:38.805 10:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:38.805 10:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:38.806 10:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:38.806 10:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:38.806 10:52:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:40.722 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:40.722 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:40.722 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.722 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:40.722 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.722 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:40.722 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:40.722 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:40.722 10:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:40.722 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:40.722 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:40.722 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:40.722 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:40.722 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:40.722 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:40.722 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:40.722 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:40.722 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:41.296 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:42.682 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:42.682 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:42.682 10:52:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:42.682 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:42.682 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.682 ************************************ 00:10:42.682 START TEST filesystem_ext4 00:10:42.682 ************************************ 00:10:42.682 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:42.682 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:42.682 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:42.682 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:42.682 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:42.682 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:42.682 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:42.682 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:42.682 10:52:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:42.682 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:42.682 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:42.682 mke2fs 1.47.0 (5-Feb-2023) 00:10:42.682 Discarding device blocks: 0/522240 done 00:10:42.682 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:42.682 Filesystem UUID: ba2893bc-e03b-43eb-bcab-78c3102fb72e 00:10:42.682 Superblock backups stored on blocks: 00:10:42.682 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:42.682 00:10:42.682 Allocating group tables: 0/64 done 00:10:42.682 Writing inode tables: 0/64 done 00:10:45.227 Creating journal (8192 blocks): done 00:10:45.227 Writing superblocks and filesystem accounting information: 0/64 done 00:10:45.227 00:10:45.227 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:45.227 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:51.817 10:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3133737 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:51.817 00:10:51.817 real 0m8.658s 00:10:51.817 user 0m0.025s 00:10:51.817 sys 0m0.085s 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:51.817 ************************************ 00:10:51.817 END TEST filesystem_ext4 00:10:51.817 ************************************ 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:51.817 
10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.817 ************************************ 00:10:51.817 START TEST filesystem_btrfs 00:10:51.817 ************************************ 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:10:51.817 10:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:51.817 btrfs-progs v6.8.1 00:10:51.817 See https://btrfs.readthedocs.io for more information. 00:10:51.817 00:10:51.817 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:51.817 NOTE: several default settings have changed in version 5.15, please make sure 00:10:51.817 this does not affect your deployments: 00:10:51.817 - DUP for metadata (-m dup) 00:10:51.817 - enabled no-holes (-O no-holes) 00:10:51.817 - enabled free-space-tree (-R free-space-tree) 00:10:51.817 00:10:51.817 Label: (null) 00:10:51.817 UUID: 753b858b-ccb9-49a2-aa7b-26555f11a25d 00:10:51.817 Node size: 16384 00:10:51.817 Sector size: 4096 (CPU page size: 4096) 00:10:51.817 Filesystem size: 510.00MiB 00:10:51.817 Block group profiles: 00:10:51.817 Data: single 8.00MiB 00:10:51.817 Metadata: DUP 32.00MiB 00:10:51.817 System: DUP 8.00MiB 00:10:51.817 SSD detected: yes 00:10:51.817 Zoned device: no 00:10:51.817 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:51.817 Checksum: crc32c 00:10:51.817 Number of devices: 1 00:10:51.817 Devices: 00:10:51.817 ID SIZE PATH 00:10:51.817 1 510.00MiB /dev/nvme0n1p1 00:10:51.817 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:10:51.817 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.083 10:52:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.083 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:52.083 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.083 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:52.083 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:52.083 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:52.083 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3133737 00:10:52.083 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.083 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.083 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.083 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.084 00:10:52.084 real 0m0.895s 00:10:52.084 user 0m0.018s 00:10:52.084 sys 0m0.133s 00:10:52.084 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:52.084 
10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:52.084 ************************************ 00:10:52.084 END TEST filesystem_btrfs 00:10:52.084 ************************************ 00:10:52.084 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:52.084 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:52.084 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:52.084 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.084 ************************************ 00:10:52.084 START TEST filesystem_xfs 00:10:52.084 ************************************ 00:10:52.084 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:10:52.084 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:52.084 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.084 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:52.084 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:10:52.084 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:52.084 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:10:52.084 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:10:52.084 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:10:52.084 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:10:52.084 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:52.346 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:52.346 = sectsz=512 attr=2, projid32bit=1 00:10:52.346 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:52.346 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:52.346 data = bsize=4096 blocks=130560, imaxpct=25 00:10:52.346 = sunit=0 swidth=0 blks 00:10:52.346 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:52.346 log =internal log bsize=4096 blocks=16384, version=2 00:10:52.346 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:52.346 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:53.288 Discarding blocks...Done. 
00:10:53.288 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:53.288 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:55.203 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:55.203 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:55.203 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:55.203 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:55.203 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:55.203 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:55.203 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3133737 00:10:55.203 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:55.203 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:55.203 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:55.203 10:52:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:55.203 00:10:55.203 real 0m3.057s 00:10:55.203 user 0m0.029s 00:10:55.203 sys 0m0.076s 00:10:55.203 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:55.203 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:55.203 ************************************ 00:10:55.203 END TEST filesystem_xfs 00:10:55.203 ************************************ 00:10:55.203 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3133737 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3133737 ']' 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3133737 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:55.466 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3133737 00:10:55.726 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:55.727 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:55.727 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3133737' 00:10:55.727 killing process with pid 3133737 00:10:55.727 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 3133737 00:10:55.727 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@976 -- # wait 3133737 00:10:55.727 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:55.727 00:10:55.727 real 0m20.067s 00:10:55.727 user 1m19.330s 00:10:55.727 sys 0m1.443s 00:10:55.727 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:55.727 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.727 ************************************ 00:10:55.727 END TEST nvmf_filesystem_no_in_capsule 00:10:55.727 ************************************ 00:10:55.727 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:55.727 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:55.727 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:55.727 10:52:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.988 ************************************ 00:10:55.988 START TEST nvmf_filesystem_in_capsule 00:10:55.988 ************************************ 00:10:55.988 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:10:55.988 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:55.988 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:55.988 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:55.988 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:55.988 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.988 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3137996 00:10:55.988 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3137996 00:10:55.988 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:55.988 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3137996 ']' 00:10:55.988 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.988 10:52:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:55.988 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.988 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:55.988 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.988 [2024-11-06 10:52:47.247650] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:10:55.988 [2024-11-06 10:52:47.247699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.988 [2024-11-06 10:52:47.324799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.988 [2024-11-06 10:52:47.360638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.988 [2024-11-06 10:52:47.360673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.988 [2024-11-06 10:52:47.360681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.988 [2024-11-06 10:52:47.360688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.988 [2024-11-06 10:52:47.360693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:55.988 [2024-11-06 10:52:47.362176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.988 [2024-11-06 10:52:47.362293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.988 [2024-11-06 10:52:47.362446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.988 [2024-11-06 10:52:47.362447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.932 [2024-11-06 10:52:48.092141] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.932 Malloc1 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.932 10:52:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.932 [2024-11-06 10:52:48.217472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.932 10:52:48 
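[Annotation] The trace above configures the target through a sequence of JSON-RPC calls (create the TCP transport with a 4096-byte in-capsule data size, create a 512 MiB malloc bdev, create the subsystem, attach the namespace, add the listener). As a plain sketch of that same sequence — assuming SPDK's standard `scripts/rpc.py` client and a running `nvmf_tgt`; this is a configuration fragment, not runnable standalone:

```shell
# Sketch of the RPC sequence from the trace (requires a running nvmf_tgt;
# rpc.py path is an assumption — SPDK ships it as scripts/rpc.py).
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096: in-capsule data size
rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB ramdisk, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```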
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.932 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:56.932 { 00:10:56.932 "name": "Malloc1", 00:10:56.932 "aliases": [ 00:10:56.932 "803037ea-8b14-473f-b470-49a0492db140" 00:10:56.932 ], 00:10:56.932 "product_name": "Malloc disk", 00:10:56.932 "block_size": 512, 00:10:56.932 "num_blocks": 1048576, 00:10:56.932 "uuid": "803037ea-8b14-473f-b470-49a0492db140", 00:10:56.932 "assigned_rate_limits": { 00:10:56.932 "rw_ios_per_sec": 0, 00:10:56.932 "rw_mbytes_per_sec": 0, 00:10:56.932 "r_mbytes_per_sec": 0, 00:10:56.932 "w_mbytes_per_sec": 0 00:10:56.932 }, 00:10:56.932 "claimed": true, 00:10:56.932 "claim_type": "exclusive_write", 00:10:56.932 "zoned": false, 00:10:56.932 "supported_io_types": { 00:10:56.932 "read": true, 00:10:56.932 "write": true, 00:10:56.932 "unmap": true, 00:10:56.932 "flush": true, 00:10:56.932 "reset": true, 00:10:56.932 "nvme_admin": false, 00:10:56.932 "nvme_io": false, 00:10:56.932 "nvme_io_md": false, 00:10:56.932 "write_zeroes": true, 00:10:56.933 "zcopy": true, 00:10:56.933 "get_zone_info": false, 00:10:56.933 "zone_management": false, 00:10:56.933 "zone_append": false, 00:10:56.933 "compare": false, 00:10:56.933 "compare_and_write": false, 00:10:56.933 "abort": true, 00:10:56.933 "seek_hole": false, 00:10:56.933 "seek_data": false, 00:10:56.933 "copy": true, 00:10:56.933 "nvme_iov_md": false 00:10:56.933 }, 00:10:56.933 "memory_domains": [ 00:10:56.933 { 00:10:56.933 "dma_device_id": "system", 00:10:56.933 "dma_device_type": 1 00:10:56.933 }, 00:10:56.933 { 00:10:56.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.933 "dma_device_type": 2 00:10:56.933 } 00:10:56.933 ], 00:10:56.933 
"driver_specific": {} 00:10:56.933 } 00:10:56.933 ]' 00:10:56.933 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:56.933 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:56.933 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:56.933 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:56.933 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:56.933 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:56.933 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:56.933 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:58.849 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:58.849 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:58.849 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:58.849 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n 
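[Annotation] The `get_bdev_size` step above pulls `block_size` and `num_blocks` out of the `bdev_get_bdevs` JSON with `jq` and derives the expected byte size that is later compared against the exported NVMe namespace. The arithmetic it performs can be reproduced directly (values taken from the trace):

```shell
# Recompute the size check from the trace: block_size * num_blocks from
# bdev_get_bdevs, which must match the 512 MiB malloc bdev in bytes.
bs=512                      # jq '.[] .block_size'
nb=1048576                  # jq '.[] .num_blocks'
malloc_size=$((bs * nb))    # bytes
echo "$malloc_size"         # 536870912 (512 MiB), matching nvme_size
```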
'' ]] 00:10:58.849 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:00.761 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:00.761 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:00.761 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:00.761 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:00.761 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:00.761 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:00.761 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:00.761 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:00.761 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:00.761 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:00.761 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:00.761 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:00.761 10:52:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:00.761 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:00.761 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:00.761 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:00.761 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:00.761 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:01.023 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:02.410 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:02.410 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:02.410 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:02.410 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:02.410 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.410 ************************************ 00:11:02.410 START TEST filesystem_in_capsule_ext4 00:11:02.410 ************************************ 00:11:02.410 10:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:02.410 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:02.410 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:02.410 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:02.410 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:02.410 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:02.410 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:02.410 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:02.410 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:02.410 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:02.410 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:02.410 mke2fs 1.47.0 (5-Feb-2023) 00:11:02.410 Discarding device blocks: 
0/522240 done 00:11:02.410 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:02.410 Filesystem UUID: dc630d9b-6ba8-4ce2-9018-c50c8920dd4d 00:11:02.410 Superblock backups stored on blocks: 00:11:02.410 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:02.410 00:11:02.410 Allocating group tables: 0/64 done 00:11:02.410 Writing inode tables: 0/64 done 00:11:02.410 Creating journal (8192 blocks): done 00:11:03.795 Writing superblocks and filesystem accounting information: 0/64 done 00:11:03.795 00:11:03.795 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:03.795 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.382 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.382 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3137996 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.382 00:11:10.382 real 0m7.604s 00:11:10.382 user 0m0.024s 00:11:10.382 sys 0m0.080s 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:10.382 ************************************ 00:11:10.382 END TEST filesystem_in_capsule_ext4 00:11:10.382 ************************************ 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.382 ************************************ 00:11:10.382 START 
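[Annotation] Each `filesystem_in_capsule_*` test above follows the same pattern: make a filesystem on the partition, mount it, create and remove a file with syncs in between, unmount, then confirm the target process and the namespace are still healthy. A sketch of that loop, per the trace — device name and `$nvmfpid` are taken from this run and are assumptions; the mkfs step is destructive, so do not run it against a disk you care about:

```shell
# Per-filesystem check as driven by target/filesystem.sh in the trace
# (destructive; dev/pid values are assumptions from this particular run).
dev=/dev/nvme0n1p1
mkfs.ext4 -F "$dev"                     # btrfs/xfs variants use mkfs.* -f
mount "$dev" /mnt/device
touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync
umount /mnt/device
kill -0 "$nvmfpid"                      # target process must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1   # namespace still visible to the host
```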
TEST filesystem_in_capsule_btrfs 00:11:10.382 ************************************ 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:10.382 btrfs-progs v6.8.1 00:11:10.382 See https://btrfs.readthedocs.io for more information. 00:11:10.382 00:11:10.382 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:10.382 NOTE: several default settings have changed in version 5.15, please make sure 00:11:10.382 this does not affect your deployments: 00:11:10.382 - DUP for metadata (-m dup) 00:11:10.382 - enabled no-holes (-O no-holes) 00:11:10.382 - enabled free-space-tree (-R free-space-tree) 00:11:10.382 00:11:10.382 Label: (null) 00:11:10.382 UUID: 3d0b73cc-da03-4131-beaf-1f5f626287ff 00:11:10.382 Node size: 16384 00:11:10.382 Sector size: 4096 (CPU page size: 4096) 00:11:10.382 Filesystem size: 510.00MiB 00:11:10.382 Block group profiles: 00:11:10.382 Data: single 8.00MiB 00:11:10.382 Metadata: DUP 32.00MiB 00:11:10.382 System: DUP 8.00MiB 00:11:10.382 SSD detected: yes 00:11:10.382 Zoned device: no 00:11:10.382 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:10.382 Checksum: crc32c 00:11:10.382 Number of devices: 1 00:11:10.382 Devices: 00:11:10.382 ID SIZE PATH 00:11:10.382 1 510.00MiB /dev/nvme0n1p1 00:11:10.382 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.382 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3137996 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.644 00:11:10.644 real 0m0.723s 00:11:10.644 user 0m0.033s 00:11:10.644 sys 0m0.115s 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:10.644 ************************************ 00:11:10.644 END TEST filesystem_in_capsule_btrfs 00:11:10.644 ************************************ 00:11:10.644 10:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.644 ************************************ 00:11:10.644 START TEST filesystem_in_capsule_xfs 00:11:10.644 ************************************ 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:10.644 
10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:10.644 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:10.644 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:10.644 = sectsz=512 attr=2, projid32bit=1 00:11:10.644 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:10.644 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:10.644 data = bsize=4096 blocks=130560, imaxpct=25 00:11:10.644 = sunit=0 swidth=0 blks 00:11:10.644 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:10.644 log =internal log bsize=4096 blocks=16384, version=2 00:11:10.644 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:10.644 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:11.587 Discarding blocks...Done. 
00:11:11.587 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:11.587 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:14.134 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:14.134 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:14.134 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:14.134 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:14.134 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:14.134 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:14.134 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3137996 00:11:14.134 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:14.134 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:14.134 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:14.134 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:14.134 00:11:14.134 real 0m3.356s 00:11:14.134 user 0m0.026s 00:11:14.134 sys 0m0.079s 00:11:14.134 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:14.134 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:14.134 ************************************ 00:11:14.134 END TEST filesystem_in_capsule_xfs 00:11:14.134 ************************************ 00:11:14.134 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:14.134 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:14.393 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:14.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.654 10:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3137996 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3137996 ']' 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3137996 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:14.654 10:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3137996 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3137996' 00:11:14.654 killing process with pid 3137996 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 3137996 00:11:14.654 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 3137996 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:14.916 00:11:14.916 real 0m18.994s 00:11:14.916 user 1m15.129s 00:11:14.916 sys 0m1.428s 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.916 ************************************ 00:11:14.916 END TEST nvmf_filesystem_in_capsule 00:11:14.916 ************************************ 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:14.916 rmmod nvme_tcp 00:11:14.916 rmmod nvme_fabrics 00:11:14.916 rmmod nvme_keyring 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.916 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.568 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:17.568 00:11:17.568 real 0m48.957s 00:11:17.568 user 2m36.608s 00:11:17.568 sys 0m8.563s 00:11:17.568 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:17.568 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.568 ************************************ 00:11:17.568 END TEST nvmf_filesystem 00:11:17.568 ************************************ 00:11:17.568 10:53:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:17.568 10:53:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:17.568 10:53:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:17.568 10:53:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:17.568 ************************************ 00:11:17.568 START TEST nvmf_target_discovery 00:11:17.568 ************************************ 00:11:17.568 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:17.568 * Looking for test storage... 
00:11:17.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.568 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:17.568 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:17.568 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:17.568 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:17.568 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:17.569 
10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:17.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.569 --rc genhtml_branch_coverage=1 00:11:17.569 --rc genhtml_function_coverage=1 00:11:17.569 --rc genhtml_legend=1 00:11:17.569 --rc geninfo_all_blocks=1 00:11:17.569 --rc geninfo_unexecuted_blocks=1 00:11:17.569 00:11:17.569 ' 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:17.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.569 --rc genhtml_branch_coverage=1 00:11:17.569 --rc genhtml_function_coverage=1 00:11:17.569 --rc genhtml_legend=1 00:11:17.569 --rc geninfo_all_blocks=1 00:11:17.569 --rc geninfo_unexecuted_blocks=1 00:11:17.569 00:11:17.569 ' 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:17.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.569 --rc genhtml_branch_coverage=1 00:11:17.569 --rc genhtml_function_coverage=1 00:11:17.569 --rc genhtml_legend=1 00:11:17.569 --rc geninfo_all_blocks=1 00:11:17.569 --rc geninfo_unexecuted_blocks=1 00:11:17.569 00:11:17.569 ' 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:17.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.569 --rc genhtml_branch_coverage=1 00:11:17.569 --rc genhtml_function_coverage=1 00:11:17.569 --rc genhtml_legend=1 00:11:17.569 --rc geninfo_all_blocks=1 00:11:17.569 --rc geninfo_unexecuted_blocks=1 00:11:17.569 00:11:17.569 ' 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.569 10:53:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:17.569 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:17.570 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.570 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:17.570 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:17.570 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:17.570 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.570 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.570 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.570 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:17.570 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:17.570 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:17.570 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.158 10:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.158 10:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:24.158 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:24.158 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.158 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.159 10:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:24.159 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.159 10:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:24.159 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.159 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:24.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:11:24.422 00:11:24.422 --- 10.0.0.2 ping statistics --- 00:11:24.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.422 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:24.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:11:24.422 00:11:24.422 --- 10.0.0.1 ping statistics --- 00:11:24.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.422 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3145919 00:11:24.422 10:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3145919 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 3145919 ']' 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:24.422 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.422 [2024-11-06 10:53:15.713142] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:11:24.422 [2024-11-06 10:53:15.713214] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.422 [2024-11-06 10:53:15.798176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.422 [2024-11-06 10:53:15.841772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
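The `nvmf_tcp_init` sequence above (common.sh@271 through @291) moves the target interface into a private namespace, addresses both ends, opens TCP port 4420, and ping-checks both directions before `nvmf_tgt` is launched inside the namespace. A dry-run sketch of that plumbing: `run()` only prints each command, so the sketch executes safely without root or the `cvl_*` interfaces (the wrapper is an illustration device, not part of common.sh):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns plumbing nvmf_tcp_init performs.
# run() prints instead of executing, so no root is needed.
set -euo pipefail

run() { echo "+ $*"; }

plumb() {
  local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
  run ip netns add "$ns"
  run ip link set "$target_if" netns "$ns"          # target side lives in the netns
  run ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator stays in the root ns
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  run ip link set "$initiator_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2                            # initiator -> target reachability
}

plumb
```

Dropping the `run` prefix reproduces the commands the log shows, after which the target is started with `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt` as in the trace.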
00:11:24.422 [2024-11-06 10:53:15.841810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.422 [2024-11-06 10:53:15.841819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.422 [2024-11-06 10:53:15.841826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.422 [2024-11-06 10:53:15.841831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.683 [2024-11-06 10:53:15.843514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.683 [2024-11-06 10:53:15.843631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.683 [2024-11-06 10:53:15.843801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.683 [2024-11-06 10:53:15.843801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.255 [2024-11-06 10:53:16.571834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.255 Null1 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.255 
10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.255 [2024-11-06 10:53:16.632142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.255 Null2 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.255 
10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.255 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.516 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.516 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:25.516 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.517 Null3 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.517 Null4 00:11:25.517 
10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.517 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:25.778 00:11:25.778 Discovery Log Number of Records 6, Generation counter 6 00:11:25.778 =====Discovery Log Entry 0====== 00:11:25.778 trtype: tcp 00:11:25.778 adrfam: ipv4 00:11:25.778 subtype: current discovery subsystem 00:11:25.778 treq: not required 00:11:25.778 portid: 0 00:11:25.778 trsvcid: 4420 00:11:25.778 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:25.778 traddr: 10.0.0.2 00:11:25.778 eflags: explicit discovery connections, duplicate discovery information 00:11:25.778 sectype: none 00:11:25.778 =====Discovery Log Entry 1====== 00:11:25.778 trtype: tcp 00:11:25.778 adrfam: ipv4 00:11:25.778 subtype: nvme subsystem 00:11:25.778 treq: not required 00:11:25.778 portid: 0 00:11:25.778 trsvcid: 4420 00:11:25.778 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:25.778 traddr: 10.0.0.2 00:11:25.778 eflags: none 00:11:25.778 sectype: none 00:11:25.778 =====Discovery Log Entry 2====== 00:11:25.778 
trtype: tcp 00:11:25.778 adrfam: ipv4 00:11:25.778 subtype: nvme subsystem 00:11:25.778 treq: not required 00:11:25.778 portid: 0 00:11:25.778 trsvcid: 4420 00:11:25.778 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:25.778 traddr: 10.0.0.2 00:11:25.778 eflags: none 00:11:25.778 sectype: none 00:11:25.778 =====Discovery Log Entry 3====== 00:11:25.778 trtype: tcp 00:11:25.778 adrfam: ipv4 00:11:25.778 subtype: nvme subsystem 00:11:25.778 treq: not required 00:11:25.778 portid: 0 00:11:25.778 trsvcid: 4420 00:11:25.778 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:25.778 traddr: 10.0.0.2 00:11:25.778 eflags: none 00:11:25.778 sectype: none 00:11:25.778 =====Discovery Log Entry 4====== 00:11:25.778 trtype: tcp 00:11:25.778 adrfam: ipv4 00:11:25.778 subtype: nvme subsystem 00:11:25.778 treq: not required 00:11:25.778 portid: 0 00:11:25.778 trsvcid: 4420 00:11:25.778 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:25.778 traddr: 10.0.0.2 00:11:25.778 eflags: none 00:11:25.778 sectype: none 00:11:25.778 =====Discovery Log Entry 5====== 00:11:25.778 trtype: tcp 00:11:25.778 adrfam: ipv4 00:11:25.778 subtype: discovery subsystem referral 00:11:25.778 treq: not required 00:11:25.778 portid: 0 00:11:25.778 trsvcid: 4430 00:11:25.778 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:25.778 traddr: 10.0.0.2 00:11:25.778 eflags: none 00:11:25.778 sectype: none 00:11:25.778 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:25.778 Perform nvmf subsystem discovery via RPC 00:11:25.778 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:25.778 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.778 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.778 [ 00:11:25.778 { 00:11:25.778 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:25.778 "subtype": "Discovery", 00:11:25.778 "listen_addresses": [ 00:11:25.778 { 00:11:25.778 "trtype": "TCP", 00:11:25.778 "adrfam": "IPv4", 00:11:25.778 "traddr": "10.0.0.2", 00:11:25.778 "trsvcid": "4420" 00:11:25.778 } 00:11:25.778 ], 00:11:25.778 "allow_any_host": true, 00:11:25.778 "hosts": [] 00:11:25.778 }, 00:11:25.778 { 00:11:25.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:25.778 "subtype": "NVMe", 00:11:25.778 "listen_addresses": [ 00:11:25.778 { 00:11:25.778 "trtype": "TCP", 00:11:25.778 "adrfam": "IPv4", 00:11:25.778 "traddr": "10.0.0.2", 00:11:25.778 "trsvcid": "4420" 00:11:25.778 } 00:11:25.778 ], 00:11:25.778 "allow_any_host": true, 00:11:25.778 "hosts": [], 00:11:25.778 "serial_number": "SPDK00000000000001", 00:11:25.778 "model_number": "SPDK bdev Controller", 00:11:25.778 "max_namespaces": 32, 00:11:25.778 "min_cntlid": 1, 00:11:25.778 "max_cntlid": 65519, 00:11:25.778 "namespaces": [ 00:11:25.778 { 00:11:25.778 "nsid": 1, 00:11:25.778 "bdev_name": "Null1", 00:11:25.778 "name": "Null1", 00:11:25.778 "nguid": "866819A3C8F14218A6F1197173CFA1ED", 00:11:25.778 "uuid": "866819a3-c8f1-4218-a6f1-197173cfa1ed" 00:11:25.778 } 00:11:25.778 ] 00:11:25.778 }, 00:11:25.778 { 00:11:25.778 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:25.779 "subtype": "NVMe", 00:11:25.779 "listen_addresses": [ 00:11:25.779 { 00:11:25.779 "trtype": "TCP", 00:11:25.779 "adrfam": "IPv4", 00:11:25.779 "traddr": "10.0.0.2", 00:11:25.779 "trsvcid": "4420" 00:11:25.779 } 00:11:25.779 ], 00:11:25.779 "allow_any_host": true, 00:11:25.779 "hosts": [], 00:11:25.779 "serial_number": "SPDK00000000000002", 00:11:25.779 "model_number": "SPDK bdev Controller", 00:11:25.779 "max_namespaces": 32, 00:11:25.779 "min_cntlid": 1, 00:11:25.779 "max_cntlid": 65519, 00:11:25.779 "namespaces": [ 00:11:25.779 { 00:11:25.779 "nsid": 1, 00:11:25.779 "bdev_name": "Null2", 00:11:25.779 "name": "Null2", 00:11:25.779 "nguid": "D556F5BE6067437998254D41AD47B486", 
00:11:25.779 "uuid": "d556f5be-6067-4379-9825-4d41ad47b486" 00:11:25.779 } 00:11:25.779 ] 00:11:25.779 }, 00:11:25.779 { 00:11:25.779 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:25.779 "subtype": "NVMe", 00:11:25.779 "listen_addresses": [ 00:11:25.779 { 00:11:25.779 "trtype": "TCP", 00:11:25.779 "adrfam": "IPv4", 00:11:25.779 "traddr": "10.0.0.2", 00:11:25.779 "trsvcid": "4420" 00:11:25.779 } 00:11:25.779 ], 00:11:25.779 "allow_any_host": true, 00:11:25.779 "hosts": [], 00:11:25.779 "serial_number": "SPDK00000000000003", 00:11:25.779 "model_number": "SPDK bdev Controller", 00:11:25.779 "max_namespaces": 32, 00:11:25.779 "min_cntlid": 1, 00:11:25.779 "max_cntlid": 65519, 00:11:25.779 "namespaces": [ 00:11:25.779 { 00:11:25.779 "nsid": 1, 00:11:25.779 "bdev_name": "Null3", 00:11:25.779 "name": "Null3", 00:11:25.779 "nguid": "7CFF82BD38924861808CB4821A7C8D74", 00:11:25.779 "uuid": "7cff82bd-3892-4861-808c-b4821a7c8d74" 00:11:25.779 } 00:11:25.779 ] 00:11:25.779 }, 00:11:25.779 { 00:11:25.779 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:25.779 "subtype": "NVMe", 00:11:25.779 "listen_addresses": [ 00:11:25.779 { 00:11:25.779 "trtype": "TCP", 00:11:25.779 "adrfam": "IPv4", 00:11:25.779 "traddr": "10.0.0.2", 00:11:25.779 "trsvcid": "4420" 00:11:25.779 } 00:11:25.779 ], 00:11:25.779 "allow_any_host": true, 00:11:25.779 "hosts": [], 00:11:25.779 "serial_number": "SPDK00000000000004", 00:11:25.779 "model_number": "SPDK bdev Controller", 00:11:25.779 "max_namespaces": 32, 00:11:25.779 "min_cntlid": 1, 00:11:25.779 "max_cntlid": 65519, 00:11:25.779 "namespaces": [ 00:11:25.779 { 00:11:25.779 "nsid": 1, 00:11:25.779 "bdev_name": "Null4", 00:11:25.779 "name": "Null4", 00:11:25.779 "nguid": "A357E5D6295145A59F62746E17EC834D", 00:11:25.779 "uuid": "a357e5d6-2951-45a5-9f62-746e17ec834d" 00:11:25.779 } 00:11:25.779 ] 00:11:25.779 } 00:11:25.779 ] 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.779 
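The `nvmf_get_subsystems` reply above lists the discovery subsystem plus the four NVMe subsystems created earlier (cnode1 through cnode4). A quick way to sanity-check such a reply from shell is to count entries by `subtype`; the JSON below is a trimmed stand-in for the full reply printed by the RPC (only the fields the check needs are kept):

```shell
#!/usr/bin/env bash
# Count NVMe subsystems in a trimmed nvmf_get_subsystems-style reply;
# the Discovery subsystem is excluded by the subtype match.
set -euo pipefail

reply='[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe"},
  {"nqn": "nqn.2016-06.io.spdk:cnode2", "subtype": "NVMe"},
  {"nqn": "nqn.2016-06.io.spdk:cnode3", "subtype": "NVMe"},
  {"nqn": "nqn.2016-06.io.spdk:cnode4", "subtype": "NVMe"}
]'

nvme_count=$(grep -c '"subtype": "NVMe"' <<< "$reply")
echo "NVMe subsystems: $nvme_count"
```

A grep over pretty-printed JSON is fragile against reformatting; a `jq`-based filter would be more robust where `jq` is available.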
10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.779 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.040 rmmod nvme_tcp 00:11:26.040 rmmod nvme_fabrics 00:11:26.040 rmmod nvme_keyring 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3145919 ']' 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3145919 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 3145919 ']' 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 3145919 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 
00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3145919 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3145919' 00:11:26.040 killing process with pid 3145919 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 3145919 00:11:26.040 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 3145919 00:11:26.300 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:26.300 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:26.300 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:26.300 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:26.300 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:26.300 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:26.300 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:26.300 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:26.300 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:26.300 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.300 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.300 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.213 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:28.213 00:11:28.213 real 0m11.094s 00:11:28.213 user 0m8.626s 00:11:28.213 sys 0m5.768s 00:11:28.213 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:28.213 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.213 ************************************ 00:11:28.213 END TEST nvmf_target_discovery 00:11:28.213 ************************************ 00:11:28.213 10:53:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:28.213 10:53:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:28.213 10:53:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:28.213 10:53:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:28.473 ************************************ 00:11:28.473 START TEST nvmf_referrals 00:11:28.473 ************************************ 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:28.473 * Looking for test storage... 
00:11:28.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:28.473 10:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:28.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.473 
--rc genhtml_branch_coverage=1 00:11:28.473 --rc genhtml_function_coverage=1 00:11:28.473 --rc genhtml_legend=1 00:11:28.473 --rc geninfo_all_blocks=1 00:11:28.473 --rc geninfo_unexecuted_blocks=1 00:11:28.473 00:11:28.473 ' 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:28.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.473 --rc genhtml_branch_coverage=1 00:11:28.473 --rc genhtml_function_coverage=1 00:11:28.473 --rc genhtml_legend=1 00:11:28.473 --rc geninfo_all_blocks=1 00:11:28.473 --rc geninfo_unexecuted_blocks=1 00:11:28.473 00:11:28.473 ' 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:28.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.473 --rc genhtml_branch_coverage=1 00:11:28.473 --rc genhtml_function_coverage=1 00:11:28.473 --rc genhtml_legend=1 00:11:28.473 --rc geninfo_all_blocks=1 00:11:28.473 --rc geninfo_unexecuted_blocks=1 00:11:28.473 00:11:28.473 ' 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:28.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.473 --rc genhtml_branch_coverage=1 00:11:28.473 --rc genhtml_function_coverage=1 00:11:28.473 --rc genhtml_legend=1 00:11:28.473 --rc geninfo_all_blocks=1 00:11:28.473 --rc geninfo_unexecuted_blocks=1 00:11:28.473 00:11:28.473 ' 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.473 
10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.473 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.474 10:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:28.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:28.474 10:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.474 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.734 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:28.734 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:28.734 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:28.734 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:36.881 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:36.881 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.881 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:36.882 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:36.882 10:53:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:36.882 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.882 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:36.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:11:36.882 00:11:36.882 --- 10.0.0.2 ping statistics --- 00:11:36.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.882 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:36.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:36.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:11:36.882 00:11:36.882 --- 10.0.0.1 ping statistics --- 00:11:36.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.882 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.882 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3150610 00:11:36.883 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3150610 00:11:36.883 
10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:36.883 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 3150610 ']' 00:11:36.883 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.883 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:36.883 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.883 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:36.883 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.883 [2024-11-06 10:53:27.280337] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:11:36.883 [2024-11-06 10:53:27.280409] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.883 [2024-11-06 10:53:27.364066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.883 [2024-11-06 10:53:27.405905] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.883 [2024-11-06 10:53:27.405940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:36.883 [2024-11-06 10:53:27.405948] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.883 [2024-11-06 10:53:27.405955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.883 [2024-11-06 10:53:27.405961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.883 [2024-11-06 10:53:27.407811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.883 [2024-11-06 10:53:27.408089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.883 [2024-11-06 10:53:27.408243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.883 [2024-11-06 10:53:27.408244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.883 [2024-11-06 10:53:28.131176] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.883 [2024-11-06 10:53:28.147373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:36.883 10:53:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:36.883 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:36.884 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:37.145 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:37.145 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:37.145 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:37.145 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.145 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.145 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.145 10:53:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:37.145 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.145 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.145 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.145 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:37.145 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.145 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.145 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.145 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:37.145 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:37.145 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.145 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.404 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.404 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:37.404 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:37.404 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:37.405 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:37.405 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:37.405 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:37.405 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:37.668 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:37.668 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:37.668 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:37.668 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:37.668 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:37.668 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:37.929 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:37.929 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:37.929 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:37.929 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:37.929 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:37.929 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:37.929 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:37.929 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:38.190 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:38.450 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:38.450 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:38.450 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:38.450 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:38.450 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:38.450 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.450 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:38.450 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:38.450 10:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:38.450 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:38.450 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:38.450 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.450 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:38.711 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:38.711 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:38.711 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.711 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.711 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.711 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:38.711 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:38.711 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.711 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:38.711 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.711 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:38.711 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:38.711 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:38.711 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:38.711 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.711 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:38.711 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:38.972 rmmod nvme_tcp 00:11:38.972 rmmod nvme_fabrics 00:11:38.972 rmmod nvme_keyring 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3150610 ']' 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3150610 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 3150610 ']' 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 3150610 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3150610 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3150610' 00:11:38.972 killing process with pid 3150610 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@971 -- # kill 3150610 00:11:38.972 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 3150610 00:11:39.233 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:39.233 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:39.233 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:39.233 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:39.233 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:39.233 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:39.233 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:39.233 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:39.233 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:39.233 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.233 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.233 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.147 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:41.408 00:11:41.408 real 0m12.924s 00:11:41.408 user 0m15.490s 00:11:41.408 sys 0m6.267s 00:11:41.408 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:41.408 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:41.408 
************************************ 00:11:41.408 END TEST nvmf_referrals 00:11:41.408 ************************************ 00:11:41.408 10:53:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:41.408 10:53:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:41.408 10:53:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:41.408 10:53:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:41.408 ************************************ 00:11:41.408 START TEST nvmf_connect_disconnect 00:11:41.408 ************************************ 00:11:41.408 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:41.408 * Looking for test storage... 
00:11:41.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.408 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:41.408 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:41.408 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:41.670 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:41.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.671 --rc genhtml_branch_coverage=1 00:11:41.671 --rc genhtml_function_coverage=1 00:11:41.671 --rc genhtml_legend=1 00:11:41.671 --rc geninfo_all_blocks=1 00:11:41.671 --rc geninfo_unexecuted_blocks=1 00:11:41.671 00:11:41.671 ' 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:41.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.671 --rc genhtml_branch_coverage=1 00:11:41.671 --rc genhtml_function_coverage=1 00:11:41.671 --rc genhtml_legend=1 00:11:41.671 --rc geninfo_all_blocks=1 00:11:41.671 --rc geninfo_unexecuted_blocks=1 00:11:41.671 00:11:41.671 ' 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:41.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.671 --rc genhtml_branch_coverage=1 00:11:41.671 --rc genhtml_function_coverage=1 00:11:41.671 --rc genhtml_legend=1 00:11:41.671 --rc geninfo_all_blocks=1 00:11:41.671 --rc geninfo_unexecuted_blocks=1 00:11:41.671 00:11:41.671 ' 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:41.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.671 --rc genhtml_branch_coverage=1 00:11:41.671 --rc genhtml_function_coverage=1 00:11:41.671 --rc genhtml_legend=1 00:11:41.671 --rc geninfo_all_blocks=1 00:11:41.671 --rc geninfo_unexecuted_blocks=1 00:11:41.671 00:11:41.671 ' 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:41.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:41.671 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:41.672 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:41.672 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:41.672 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:41.672 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:41.672 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.672 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:41.672 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:41.672 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:41.672 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.672 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.672 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.672 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:41.672 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:41.672 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:41.672 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.820 10:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:49.820 10:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:49.820 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:49.820 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.820 10:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:49.820 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.820 10:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.820 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:49.821 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.821 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.821 10:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:49.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:11:49.821 00:11:49.821 --- 10.0.0.2 ping statistics --- 00:11:49.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.821 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:49.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:11:49.821 00:11:49.821 --- 10.0.0.1 ping statistics --- 00:11:49.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.821 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=3155383 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3155383 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 3155383 ']' 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.821 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:49.821 [2024-11-06 10:53:40.256658] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:11:49.821 [2024-11-06 10:53:40.256731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.821 [2024-11-06 10:53:40.339710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.821 [2024-11-06 10:53:40.381733] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:49.821 [2024-11-06 10:53:40.381775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.821 [2024-11-06 10:53:40.381784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.821 [2024-11-06 10:53:40.381791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.821 [2024-11-06 10:53:40.381797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.821 [2024-11-06 10:53:40.383635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.821 [2024-11-06 10:53:40.383757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.821 [2024-11-06 10:53:40.383867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.821 [2024-11-06 10:53:40.384000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:49.821 10:53:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.821 [2024-11-06 10:53:41.103028] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.821 10:53:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.821 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.822 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.822 [2024-11-06 10:53:41.181068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.822 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.822 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:49.822 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:49.822 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:54.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.150 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:08.150 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:08.150 10:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:08.150 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:08.150 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:08.150 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:08.150 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:08.150 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:08.150 rmmod nvme_tcp 00:12:08.150 rmmod nvme_fabrics 00:12:08.150 rmmod nvme_keyring 00:12:08.150 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:08.150 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:08.150 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:08.150 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3155383 ']' 00:12:08.150 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3155383 00:12:08.150 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3155383 ']' 00:12:08.150 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 3155383 00:12:08.150 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 00:12:08.150 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:08.150 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3155383 
00:12:08.412 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:08.412 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:08.412 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3155383' 00:12:08.412 killing process with pid 3155383 00:12:08.412 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 3155383 00:12:08.412 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 3155383 00:12:08.412 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:08.412 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:08.412 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:08.412 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:08.412 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:08.412 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:08.412 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:08.412 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:08.412 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:08.412 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.412 10:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.412 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.957 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:10.957 00:12:10.957 real 0m29.185s 00:12:10.957 user 1m19.070s 00:12:10.957 sys 0m7.101s 00:12:10.957 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:10.957 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:10.957 ************************************ 00:12:10.957 END TEST nvmf_connect_disconnect 00:12:10.957 ************************************ 00:12:10.957 10:54:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:10.957 10:54:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:10.957 10:54:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:10.957 10:54:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:10.957 ************************************ 00:12:10.957 START TEST nvmf_multitarget 00:12:10.957 ************************************ 00:12:10.957 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:10.957 * Looking for test storage... 
00:12:10.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:10.957 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:10.958 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.958 --rc genhtml_branch_coverage=1 00:12:10.958 --rc genhtml_function_coverage=1 00:12:10.958 --rc genhtml_legend=1 00:12:10.958 --rc geninfo_all_blocks=1 00:12:10.958 --rc geninfo_unexecuted_blocks=1 00:12:10.958 00:12:10.958 ' 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:10.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.958 --rc genhtml_branch_coverage=1 00:12:10.958 --rc genhtml_function_coverage=1 00:12:10.958 --rc genhtml_legend=1 00:12:10.958 --rc geninfo_all_blocks=1 00:12:10.958 --rc geninfo_unexecuted_blocks=1 00:12:10.958 00:12:10.958 ' 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:10.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.958 --rc genhtml_branch_coverage=1 00:12:10.958 --rc genhtml_function_coverage=1 00:12:10.958 --rc genhtml_legend=1 00:12:10.958 --rc geninfo_all_blocks=1 00:12:10.958 --rc geninfo_unexecuted_blocks=1 00:12:10.958 00:12:10.958 ' 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:10.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.958 --rc genhtml_branch_coverage=1 00:12:10.958 --rc genhtml_function_coverage=1 00:12:10.958 --rc genhtml_legend=1 00:12:10.958 --rc geninfo_all_blocks=1 00:12:10.958 --rc geninfo_unexecuted_blocks=1 00:12:10.958 00:12:10.958 ' 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.958 10:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:10.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.958 10:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:10.958 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:19.105 10:54:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:19.105 10:54:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:19.105 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:19.105 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:19.105 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.106 10:54:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:19.106 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.106 
10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:19.106 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.106 10:54:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:19.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:12:19.106 00:12:19.106 --- 10.0.0.2 ping statistics --- 00:12:19.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.106 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:19.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:12:19.106 00:12:19.106 --- 10.0.0.1 ping statistics --- 00:12:19.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.106 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3163831 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 3163831 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 3163831 ']' 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:19.106 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:19.106 [2024-11-06 10:54:09.668219] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:12:19.106 [2024-11-06 10:54:09.668280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.106 [2024-11-06 10:54:09.753523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.106 [2024-11-06 10:54:09.796273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.106 [2024-11-06 10:54:09.796312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:19.106 [2024-11-06 10:54:09.796324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.106 [2024-11-06 10:54:09.796331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.106 [2024-11-06 10:54:09.796337] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.106 [2024-11-06 10:54:09.800771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.106 [2024-11-06 10:54:09.801028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.106 [2024-11-06 10:54:09.801186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.106 [2024-11-06 10:54:09.801186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.106 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:19.106 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:12:19.106 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:19.106 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:19.106 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:19.106 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.106 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:19.107 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:19.107 10:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:19.367 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:19.367 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:19.367 "nvmf_tgt_1" 00:12:19.367 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:19.628 "nvmf_tgt_2" 00:12:19.628 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:19.628 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:19.628 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:19.628 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:19.628 true 00:12:19.628 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:19.888 true 00:12:19.888 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:19.888 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:19.888 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:19.888 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:19.888 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:19.888 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:19.888 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:19.888 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:19.889 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:19.889 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:19.889 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:19.889 rmmod nvme_tcp 00:12:19.889 rmmod nvme_fabrics 00:12:19.889 rmmod nvme_keyring 00:12:19.889 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3163831 ']' 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3163831 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 3163831 ']' 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 3163831 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3163831 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3163831' 00:12:20.149 killing process with pid 3163831 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 3163831 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 3163831 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.149 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.694 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:22.694 00:12:22.695 real 0m11.670s 00:12:22.695 user 0m9.817s 00:12:22.695 sys 0m6.040s 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.695 ************************************ 00:12:22.695 END TEST nvmf_multitarget 00:12:22.695 ************************************ 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:22.695 ************************************ 00:12:22.695 START TEST nvmf_rpc 00:12:22.695 ************************************ 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:22.695 * Looking for test storage... 
00:12:22.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:22.695 10:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:22.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.695 --rc genhtml_branch_coverage=1 00:12:22.695 --rc genhtml_function_coverage=1 00:12:22.695 --rc genhtml_legend=1 00:12:22.695 --rc geninfo_all_blocks=1 00:12:22.695 --rc geninfo_unexecuted_blocks=1 
00:12:22.695 00:12:22.695 ' 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:22.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.695 --rc genhtml_branch_coverage=1 00:12:22.695 --rc genhtml_function_coverage=1 00:12:22.695 --rc genhtml_legend=1 00:12:22.695 --rc geninfo_all_blocks=1 00:12:22.695 --rc geninfo_unexecuted_blocks=1 00:12:22.695 00:12:22.695 ' 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:22.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.695 --rc genhtml_branch_coverage=1 00:12:22.695 --rc genhtml_function_coverage=1 00:12:22.695 --rc genhtml_legend=1 00:12:22.695 --rc geninfo_all_blocks=1 00:12:22.695 --rc geninfo_unexecuted_blocks=1 00:12:22.695 00:12:22.695 ' 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:22.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.695 --rc genhtml_branch_coverage=1 00:12:22.695 --rc genhtml_function_coverage=1 00:12:22.695 --rc genhtml_legend=1 00:12:22.695 --rc geninfo_all_blocks=1 00:12:22.695 --rc geninfo_unexecuted_blocks=1 00:12:22.695 00:12:22.695 ' 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.695 10:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:22.695 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:22.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:22.696 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:22.696 10:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.389 
10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.389 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 
(0x8086 - 0x159b)' 00:12:29.390 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:29.390 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:29.390 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:29.390 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.390 10:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:29.390 
10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:29.390 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:29.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:12:29.651 00:12:29.651 --- 10.0.0.2 ping statistics --- 00:12:29.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.651 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:29.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:12:29.651 00:12:29.651 --- 10.0.0.1 ping statistics --- 00:12:29.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.651 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:29.651 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.651 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3168578 00:12:29.651 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3168578 00:12:29.651 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 3168578 
']' 00:12:29.651 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.651 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:29.651 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.651 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:29.651 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.651 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.651 [2024-11-06 10:54:21.061852] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:12:29.651 [2024-11-06 10:54:21.061924] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.912 [2024-11-06 10:54:21.146540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.912 [2024-11-06 10:54:21.188784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.912 [2024-11-06 10:54:21.188820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.912 [2024-11-06 10:54:21.188828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.912 [2024-11-06 10:54:21.188835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:29.912 [2024-11-06 10:54:21.188841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.912 [2024-11-06 10:54:21.190456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.912 [2024-11-06 10:54:21.190590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.912 [2024-11-06 10:54:21.190757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.912 [2024-11-06 10:54:21.190767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.483 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:30.483 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:30.483 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:30.483 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:30.483 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.743 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.743 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:30.743 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.743 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.743 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.743 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:30.743 "tick_rate": 2400000000, 00:12:30.743 "poll_groups": [ 00:12:30.743 { 00:12:30.743 "name": "nvmf_tgt_poll_group_000", 00:12:30.743 "admin_qpairs": 0, 00:12:30.743 "io_qpairs": 0, 00:12:30.743 
"current_admin_qpairs": 0, 00:12:30.743 "current_io_qpairs": 0, 00:12:30.743 "pending_bdev_io": 0, 00:12:30.743 "completed_nvme_io": 0, 00:12:30.743 "transports": [] 00:12:30.743 }, 00:12:30.743 { 00:12:30.743 "name": "nvmf_tgt_poll_group_001", 00:12:30.743 "admin_qpairs": 0, 00:12:30.743 "io_qpairs": 0, 00:12:30.743 "current_admin_qpairs": 0, 00:12:30.743 "current_io_qpairs": 0, 00:12:30.743 "pending_bdev_io": 0, 00:12:30.743 "completed_nvme_io": 0, 00:12:30.743 "transports": [] 00:12:30.743 }, 00:12:30.743 { 00:12:30.743 "name": "nvmf_tgt_poll_group_002", 00:12:30.743 "admin_qpairs": 0, 00:12:30.743 "io_qpairs": 0, 00:12:30.743 "current_admin_qpairs": 0, 00:12:30.743 "current_io_qpairs": 0, 00:12:30.743 "pending_bdev_io": 0, 00:12:30.743 "completed_nvme_io": 0, 00:12:30.743 "transports": [] 00:12:30.743 }, 00:12:30.743 { 00:12:30.743 "name": "nvmf_tgt_poll_group_003", 00:12:30.743 "admin_qpairs": 0, 00:12:30.743 "io_qpairs": 0, 00:12:30.743 "current_admin_qpairs": 0, 00:12:30.743 "current_io_qpairs": 0, 00:12:30.743 "pending_bdev_io": 0, 00:12:30.743 "completed_nvme_io": 0, 00:12:30.743 "transports": [] 00:12:30.743 } 00:12:30.743 ] 00:12:30.743 }' 00:12:30.743 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:30.743 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:30.743 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:30.743 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:30.743 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:30.743 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:30.743 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:30.743 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:30.743 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.743 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.743 [2024-11-06 10:54:22.034054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.743 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.743 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:30.743 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.743 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.743 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.743 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:30.743 "tick_rate": 2400000000, 00:12:30.743 "poll_groups": [ 00:12:30.743 { 00:12:30.743 "name": "nvmf_tgt_poll_group_000", 00:12:30.744 "admin_qpairs": 0, 00:12:30.744 "io_qpairs": 0, 00:12:30.744 "current_admin_qpairs": 0, 00:12:30.744 "current_io_qpairs": 0, 00:12:30.744 "pending_bdev_io": 0, 00:12:30.744 "completed_nvme_io": 0, 00:12:30.744 "transports": [ 00:12:30.744 { 00:12:30.744 "trtype": "TCP" 00:12:30.744 } 00:12:30.744 ] 00:12:30.744 }, 00:12:30.744 { 00:12:30.744 "name": "nvmf_tgt_poll_group_001", 00:12:30.744 "admin_qpairs": 0, 00:12:30.744 "io_qpairs": 0, 00:12:30.744 "current_admin_qpairs": 0, 00:12:30.744 "current_io_qpairs": 0, 00:12:30.744 "pending_bdev_io": 0, 00:12:30.744 "completed_nvme_io": 0, 00:12:30.744 "transports": [ 00:12:30.744 { 00:12:30.744 "trtype": "TCP" 00:12:30.744 } 00:12:30.744 ] 00:12:30.744 }, 00:12:30.744 { 00:12:30.744 "name": "nvmf_tgt_poll_group_002", 00:12:30.744 "admin_qpairs": 0, 00:12:30.744 "io_qpairs": 0, 00:12:30.744 
"current_admin_qpairs": 0, 00:12:30.744 "current_io_qpairs": 0, 00:12:30.744 "pending_bdev_io": 0, 00:12:30.744 "completed_nvme_io": 0, 00:12:30.744 "transports": [ 00:12:30.744 { 00:12:30.744 "trtype": "TCP" 00:12:30.744 } 00:12:30.744 ] 00:12:30.744 }, 00:12:30.744 { 00:12:30.744 "name": "nvmf_tgt_poll_group_003", 00:12:30.744 "admin_qpairs": 0, 00:12:30.744 "io_qpairs": 0, 00:12:30.744 "current_admin_qpairs": 0, 00:12:30.744 "current_io_qpairs": 0, 00:12:30.744 "pending_bdev_io": 0, 00:12:30.744 "completed_nvme_io": 0, 00:12:30.744 "transports": [ 00:12:30.744 { 00:12:30.744 "trtype": "TCP" 00:12:30.744 } 00:12:30.744 ] 00:12:30.744 } 00:12:30.744 ] 00:12:30.744 }' 00:12:30.744 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:30.744 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:30.744 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:30.744 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:30.744 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:30.744 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:30.744 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:30.744 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:30.744 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:30.744 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:30.744 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.004 Malloc1 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.004 [2024-11-06 10:54:22.236055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:31.004 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:31.005 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:31.005 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:31.005 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.005 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:31.005 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.005 
10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:31.005 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.005 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:31.005 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:31.005 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:31.005 [2024-11-06 10:54:22.273043] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:31.005 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:31.005 could not add new controller: failed to write to nvme-fabrics device 00:12:31.005 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:31.005 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:31.005 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:31.005 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:31.005 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:31.005 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.005 10:54:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.005 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.005 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.388 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.388 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:32.388 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.388 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:32.388 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.934 10:54:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:34.934 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.934 [2024-11-06 10:54:26.009223] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:34.934 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:34.934 could not add new controller: failed to write to nvme-fabrics device 00:12:34.934 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:34.934 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:34.934 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:34.934 10:54:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:34.934 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:34.934 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.934 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.934 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.934 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.320 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.320 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:36.320 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.320 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:36.320 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:38.235 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:38.235 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:38.235 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.235 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:38.235 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( 
nvme_devices == nvme_device_counter )) 00:12:38.235 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:38.235 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.494 [2024-11-06 10:54:29.772036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.494 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.495 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.495 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.408 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.408 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:40.408 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.408 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:40.408 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.323 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:42.324 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.324 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.324 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.324 10:54:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.324 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.324 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.324 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.324 [2024-11-06 10:54:33.526385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.324 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.324 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:42.324 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.324 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.324 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.324 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.324 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.324 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.324 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.324 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.711 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:43.711 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:43.711 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.711 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:43.711 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.258 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.259 [2024-11-06 10:54:37.278750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.259 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.645 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:47.645 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:47.645 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:47.645 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:47.645 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:49.561 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:49.561 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:49.561 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.561 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:49.561 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.561 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:49.561 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.561 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.561 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:49.561 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:49.561 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.823 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:49.823 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.823 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1233 -- # return 0 00:12:49.823 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.823 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.823 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.823 [2024-11-06 10:54:41.037682] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.823 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.211 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.211 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:51.211 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.211 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:51.211 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # 
sleep 2 00:12:53.757 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:53.757 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:53.757 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.757 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:53.757 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.757 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:53.757 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.757 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.757 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:53.757 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:53.757 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.757 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:53.757 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.757 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:53.757 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.758 [2024-11-06 10:54:44.759729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.758 10:54:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.758 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.144 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.144 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:55.144 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.144 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:55.144 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:57.058 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:57.058 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l 
-o NAME,SERIAL 00:12:57.058 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.058 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:57.058 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.058 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:57.058 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.058 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.058 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:57.058 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:57.058 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.058 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:57.058 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.320 [2024-11-06 10:54:48.529450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.320 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 [2024-11-06 10:54:48.593598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.321 
10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 [2024-11-06 10:54:48.661799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.321 
10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 [2024-11-06 10:54:48.734056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.321 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.583 [2024-11-06 
10:54:48.798234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.583 
10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.583 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:57.583 "tick_rate": 2400000000, 00:12:57.583 "poll_groups": [ 00:12:57.583 { 00:12:57.583 "name": "nvmf_tgt_poll_group_000", 00:12:57.583 "admin_qpairs": 0, 00:12:57.583 "io_qpairs": 224, 00:12:57.583 "current_admin_qpairs": 0, 00:12:57.583 "current_io_qpairs": 0, 00:12:57.583 "pending_bdev_io": 0, 00:12:57.583 "completed_nvme_io": 227, 00:12:57.583 "transports": [ 00:12:57.583 { 00:12:57.583 "trtype": "TCP" 00:12:57.583 } 00:12:57.583 ] 00:12:57.583 }, 00:12:57.583 { 00:12:57.583 "name": "nvmf_tgt_poll_group_001", 00:12:57.583 "admin_qpairs": 1, 00:12:57.583 "io_qpairs": 223, 00:12:57.583 "current_admin_qpairs": 0, 00:12:57.583 "current_io_qpairs": 0, 00:12:57.583 "pending_bdev_io": 0, 00:12:57.583 "completed_nvme_io": 226, 00:12:57.583 "transports": [ 00:12:57.584 { 00:12:57.584 "trtype": "TCP" 00:12:57.584 } 00:12:57.584 ] 00:12:57.584 }, 00:12:57.584 { 00:12:57.584 "name": "nvmf_tgt_poll_group_002", 00:12:57.584 "admin_qpairs": 6, 00:12:57.584 "io_qpairs": 218, 00:12:57.584 "current_admin_qpairs": 0, 00:12:57.584 "current_io_qpairs": 0, 00:12:57.584 "pending_bdev_io": 0, 00:12:57.584 "completed_nvme_io": 268, 00:12:57.584 "transports": [ 00:12:57.584 { 00:12:57.584 "trtype": "TCP" 00:12:57.584 } 00:12:57.584 ] 00:12:57.584 }, 00:12:57.584 { 00:12:57.584 "name": "nvmf_tgt_poll_group_003", 00:12:57.584 "admin_qpairs": 0, 00:12:57.584 "io_qpairs": 224, 
00:12:57.584 "current_admin_qpairs": 0, 00:12:57.584 "current_io_qpairs": 0, 00:12:57.584 "pending_bdev_io": 0, 00:12:57.584 "completed_nvme_io": 518, 00:12:57.584 "transports": [ 00:12:57.584 { 00:12:57.584 "trtype": "TCP" 00:12:57.584 } 00:12:57.584 ] 00:12:57.584 } 00:12:57.584 ] 00:12:57.584 }' 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.584 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:57.584 rmmod nvme_tcp 00:12:57.584 rmmod nvme_fabrics 00:12:57.584 rmmod nvme_keyring 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3168578 ']' 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3168578 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 3168578 ']' 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 3168578 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3168578 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3168578' 00:12:57.845 killing process with pid 3168578 00:12:57.845 10:54:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 3168578 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 3168578 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.845 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:00.391 00:13:00.391 real 0m37.626s 00:13:00.391 user 1m53.807s 00:13:00.391 sys 0m7.700s 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.391 ************************************ 00:13:00.391 END TEST 
nvmf_rpc 00:13:00.391 ************************************ 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:00.391 ************************************ 00:13:00.391 START TEST nvmf_invalid 00:13:00.391 ************************************ 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:00.391 * Looking for test storage... 00:13:00.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:00.391 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:00.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.392 --rc genhtml_branch_coverage=1 00:13:00.392 --rc genhtml_function_coverage=1 00:13:00.392 --rc genhtml_legend=1 00:13:00.392 --rc geninfo_all_blocks=1 00:13:00.392 --rc geninfo_unexecuted_blocks=1 00:13:00.392 00:13:00.392 ' 
00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:00.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.392 --rc genhtml_branch_coverage=1 00:13:00.392 --rc genhtml_function_coverage=1 00:13:00.392 --rc genhtml_legend=1 00:13:00.392 --rc geninfo_all_blocks=1 00:13:00.392 --rc geninfo_unexecuted_blocks=1 00:13:00.392 00:13:00.392 ' 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:00.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.392 --rc genhtml_branch_coverage=1 00:13:00.392 --rc genhtml_function_coverage=1 00:13:00.392 --rc genhtml_legend=1 00:13:00.392 --rc geninfo_all_blocks=1 00:13:00.392 --rc geninfo_unexecuted_blocks=1 00:13:00.392 00:13:00.392 ' 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:00.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.392 --rc genhtml_branch_coverage=1 00:13:00.392 --rc genhtml_function_coverage=1 00:13:00.392 --rc genhtml_legend=1 00:13:00.392 --rc geninfo_all_blocks=1 00:13:00.392 --rc geninfo_unexecuted_blocks=1 00:13:00.392 00:13:00.392 ' 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.392 10:54:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.392 
10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.392 10:54:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:00.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:00.392 10:54:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:00.392 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:08.536 10:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.536 10:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:08.536 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:08.536 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:08.536 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:08.536 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.536 10:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.536 10:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:08.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:13:08.536 00:13:08.536 --- 10.0.0.2 ping statistics --- 00:13:08.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.536 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:13:08.536 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:08.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:13:08.536 00:13:08.536 --- 10.0.0.1 ping statistics --- 00:13:08.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.536 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:08.536 10:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3178329 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3178329 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 3178329 ']' 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:08.536 [2024-11-06 10:54:59.114313] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:13:08.536 [2024-11-06 10:54:59.114382] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.536 [2024-11-06 10:54:59.198035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:08.536 [2024-11-06 10:54:59.242823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.536 [2024-11-06 10:54:59.242859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.536 [2024-11-06 10:54:59.242867] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.536 [2024-11-06 10:54:59.242874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.536 [2024-11-06 10:54:59.242880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:08.536 [2024-11-06 10:54:59.244482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.536 [2024-11-06 10:54:59.244604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.536 [2024-11-06 10:54:59.244785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.536 [2024-11-06 10:54:59.244785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:08.536 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:08.797 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.797 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:08.797 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12853 00:13:08.797 [2024-11-06 10:55:00.124037] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:08.797 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:08.797 { 00:13:08.797 "nqn": "nqn.2016-06.io.spdk:cnode12853", 00:13:08.797 "tgt_name": "foobar", 00:13:08.797 "method": "nvmf_create_subsystem", 00:13:08.797 "req_id": 1 00:13:08.797 } 00:13:08.797 Got JSON-RPC error 
response 00:13:08.797 response: 00:13:08.797 { 00:13:08.797 "code": -32603, 00:13:08.797 "message": "Unable to find target foobar" 00:13:08.797 }' 00:13:08.797 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:08.797 { 00:13:08.797 "nqn": "nqn.2016-06.io.spdk:cnode12853", 00:13:08.797 "tgt_name": "foobar", 00:13:08.797 "method": "nvmf_create_subsystem", 00:13:08.797 "req_id": 1 00:13:08.797 } 00:13:08.797 Got JSON-RPC error response 00:13:08.797 response: 00:13:08.797 { 00:13:08.797 "code": -32603, 00:13:08.797 "message": "Unable to find target foobar" 00:13:08.797 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:08.797 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:08.797 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29401 00:13:09.057 [2024-11-06 10:55:00.316699] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29401: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:09.057 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:09.057 { 00:13:09.057 "nqn": "nqn.2016-06.io.spdk:cnode29401", 00:13:09.057 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:09.057 "method": "nvmf_create_subsystem", 00:13:09.057 "req_id": 1 00:13:09.057 } 00:13:09.057 Got JSON-RPC error response 00:13:09.057 response: 00:13:09.057 { 00:13:09.057 "code": -32602, 00:13:09.057 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:09.057 }' 00:13:09.057 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:09.057 { 00:13:09.057 "nqn": "nqn.2016-06.io.spdk:cnode29401", 00:13:09.057 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:09.057 "method": "nvmf_create_subsystem", 
00:13:09.057 "req_id": 1 00:13:09.057 } 00:13:09.057 Got JSON-RPC error response 00:13:09.057 response: 00:13:09.057 { 00:13:09.057 "code": -32602, 00:13:09.057 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:09.057 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:09.057 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:09.057 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17303 00:13:09.318 [2024-11-06 10:55:00.509308] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17303: invalid model number 'SPDK_Controller' 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:09.318 { 00:13:09.318 "nqn": "nqn.2016-06.io.spdk:cnode17303", 00:13:09.318 "model_number": "SPDK_Controller\u001f", 00:13:09.318 "method": "nvmf_create_subsystem", 00:13:09.318 "req_id": 1 00:13:09.318 } 00:13:09.318 Got JSON-RPC error response 00:13:09.318 response: 00:13:09.318 { 00:13:09.318 "code": -32602, 00:13:09.318 "message": "Invalid MN SPDK_Controller\u001f" 00:13:09.318 }' 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:09.318 { 00:13:09.318 "nqn": "nqn.2016-06.io.spdk:cnode17303", 00:13:09.318 "model_number": "SPDK_Controller\u001f", 00:13:09.318 "method": "nvmf_create_subsystem", 00:13:09.318 "req_id": 1 00:13:09.318 } 00:13:09.318 Got JSON-RPC error response 00:13:09.318 response: 00:13:09.318 { 00:13:09.318 "code": -32602, 00:13:09.318 "message": "Invalid MN SPDK_Controller\u001f" 00:13:09.318 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.318 10:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:09.318 10:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:09.318 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:09.319 10:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:09.319 10:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.319 10:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.319 10:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ H == \- ]] 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'HX)k{v"+Gn7,9V9nZ.m#' 00:13:09.319 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'HX)k{v"+Gn7,9V9nZ.m#' nqn.2016-06.io.spdk:cnode10378 00:13:09.580 [2024-11-06 10:55:00.862480] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10378: invalid serial number 'HX)k{v"+Gn7,9V9nZ.m#' 00:13:09.580 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:09.580 { 00:13:09.580 "nqn": "nqn.2016-06.io.spdk:cnode10378", 00:13:09.580 "serial_number": "HX)k{v\u007f\"+Gn7,9V9nZ.m#", 00:13:09.580 "method": "nvmf_create_subsystem", 00:13:09.580 "req_id": 1 00:13:09.580 } 00:13:09.580 Got JSON-RPC error response 00:13:09.580 response: 00:13:09.580 { 00:13:09.580 "code": -32602, 00:13:09.580 "message": "Invalid SN HX)k{v\u007f\"+Gn7,9V9nZ.m#" 00:13:09.580 }' 00:13:09.580 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:09.580 { 00:13:09.580 "nqn": "nqn.2016-06.io.spdk:cnode10378", 00:13:09.580 "serial_number": "HX)k{v\u007f\"+Gn7,9V9nZ.m#", 00:13:09.580 "method": "nvmf_create_subsystem", 00:13:09.580 "req_id": 1 00:13:09.580 } 00:13:09.580 Got JSON-RPC error response 00:13:09.580 response: 00:13:09.580 { 00:13:09.580 "code": -32602, 00:13:09.580 "message": "Invalid SN HX)k{v\u007f\"+Gn7,9V9nZ.m#" 00:13:09.580 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:09.580 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:09.580 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 
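The per-character trace above is the expansion of the `gen_random_s` helper: for each of `length` iterations it picks an ASCII code from the `chars` array (32..127), converts it with `printf %x` / `echo -e`, and appends the character to `string`. A condensed standalone sketch (an approximation, not the verbatim `target/invalid.sh` function; codes are restricted to 33..126 here so command substitution cannot strip a trailing space or DEL):

```shell
#!/usr/bin/env bash
# Sketch of gen_random_s: build a random string one character at a time,
# mirroring the loop traced above (assumption: simplified code range).
gen_random_s() {
    local length=$1 ll code string=''
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 94 + 33 ))                    # printable, non-space
        string+=$(printf "\\$(printf '%03o' "$code")")  # octal escape -> char
    done
    echo "$string"
}
gen_random_s 21   # e.g. a 21-character serial like HX)k{v"+Gn7,9V9nZ.m#
```

The generated string is then passed to `rpc.py nvmf_create_subsystem -s`/`-d`, and the captured JSON-RPC error text is glob-matched against `*Invalid SN*` / `*Invalid MN*` (the `invalid.sh@51`/`@55`/`@59` checks in this trace).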
00:13:09.580 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:09.580 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:09.580 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:09.580 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.581 10:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:09.581 10:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:09.581 10:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.581 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:09.843 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:09.843 10:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.843 10:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:09.843 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:09.844 10:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:09.844 10:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:09.844 10:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.844 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.845 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ f == \- ]] 00:13:09.845 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'fzL4#Z4PywLr,=T`xpV4q'\''i[AeYIRXe#Lmjs?\}RL' 00:13:09.845 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'fzL4#Z4PywLr,=T`xpV4q'\''i[AeYIRXe#Lmjs?\}RL' nqn.2016-06.io.spdk:cnode12193 00:13:10.105 [2024-11-06 10:55:01.376145] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12193: invalid model number 'fzL4#Z4PywLr,=T`xpV4q'i[AeYIRXe#Lmjs?\}RL' 00:13:10.105 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:10.105 { 00:13:10.105 "nqn": "nqn.2016-06.io.spdk:cnode12193", 00:13:10.105 "model_number": "fzL4#Z4PywLr,=T`xpV4q'\''i[AeYIRXe#Lmjs?\\}RL", 00:13:10.105 "method": "nvmf_create_subsystem", 00:13:10.105 "req_id": 1 00:13:10.105 } 00:13:10.105 Got JSON-RPC error response 00:13:10.105 response: 00:13:10.105 { 00:13:10.106 "code": -32602, 00:13:10.106 "message": "Invalid MN fzL4#Z4PywLr,=T`xpV4q'\''i[AeYIRXe#Lmjs?\\}RL" 00:13:10.106 }' 00:13:10.106 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:10.106 { 00:13:10.106 
"nqn": "nqn.2016-06.io.spdk:cnode12193", 00:13:10.106 "model_number": "fzL4#Z4PywLr,=T`xpV4q'i[AeYIRXe#Lmjs?\\}RL", 00:13:10.106 "method": "nvmf_create_subsystem", 00:13:10.106 "req_id": 1 00:13:10.106 } 00:13:10.106 Got JSON-RPC error response 00:13:10.106 response: 00:13:10.106 { 00:13:10.106 "code": -32602, 00:13:10.106 "message": "Invalid MN fzL4#Z4PywLr,=T`xpV4q'i[AeYIRXe#Lmjs?\\}RL" 00:13:10.106 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:10.106 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:10.366 [2024-11-06 10:55:01.560829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.366 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:10.366 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:10.366 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:10.366 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:10.366 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:10.626 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:10.626 [2024-11-06 10:55:01.937999] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:10.626 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:10.626 { 00:13:10.626 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:10.626 "listen_address": { 00:13:10.626 "trtype": "tcp", 00:13:10.626 "traddr": "", 00:13:10.626 "trsvcid": 
"4421" 00:13:10.626 }, 00:13:10.626 "method": "nvmf_subsystem_remove_listener", 00:13:10.626 "req_id": 1 00:13:10.626 } 00:13:10.626 Got JSON-RPC error response 00:13:10.626 response: 00:13:10.626 { 00:13:10.626 "code": -32602, 00:13:10.626 "message": "Invalid parameters" 00:13:10.626 }' 00:13:10.626 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:10.626 { 00:13:10.626 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:10.626 "listen_address": { 00:13:10.626 "trtype": "tcp", 00:13:10.626 "traddr": "", 00:13:10.626 "trsvcid": "4421" 00:13:10.626 }, 00:13:10.626 "method": "nvmf_subsystem_remove_listener", 00:13:10.626 "req_id": 1 00:13:10.626 } 00:13:10.626 Got JSON-RPC error response 00:13:10.626 response: 00:13:10.626 { 00:13:10.626 "code": -32602, 00:13:10.626 "message": "Invalid parameters" 00:13:10.626 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:10.626 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31735 -i 0 00:13:10.887 [2024-11-06 10:55:02.122533] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31735: invalid cntlid range [0-65519] 00:13:10.887 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:10.887 { 00:13:10.887 "nqn": "nqn.2016-06.io.spdk:cnode31735", 00:13:10.887 "min_cntlid": 0, 00:13:10.887 "method": "nvmf_create_subsystem", 00:13:10.887 "req_id": 1 00:13:10.887 } 00:13:10.887 Got JSON-RPC error response 00:13:10.887 response: 00:13:10.887 { 00:13:10.887 "code": -32602, 00:13:10.887 "message": "Invalid cntlid range [0-65519]" 00:13:10.887 }' 00:13:10.887 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:10.887 { 00:13:10.887 "nqn": "nqn.2016-06.io.spdk:cnode31735", 00:13:10.887 "min_cntlid": 0, 00:13:10.887 "method": 
"nvmf_create_subsystem", 00:13:10.887 "req_id": 1 00:13:10.887 } 00:13:10.887 Got JSON-RPC error response 00:13:10.887 response: 00:13:10.887 { 00:13:10.887 "code": -32602, 00:13:10.887 "message": "Invalid cntlid range [0-65519]" 00:13:10.887 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:10.887 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7332 -i 65520 00:13:11.148 [2024-11-06 10:55:02.311159] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7332: invalid cntlid range [65520-65519] 00:13:11.148 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:11.148 { 00:13:11.148 "nqn": "nqn.2016-06.io.spdk:cnode7332", 00:13:11.148 "min_cntlid": 65520, 00:13:11.148 "method": "nvmf_create_subsystem", 00:13:11.148 "req_id": 1 00:13:11.148 } 00:13:11.148 Got JSON-RPC error response 00:13:11.148 response: 00:13:11.148 { 00:13:11.148 "code": -32602, 00:13:11.148 "message": "Invalid cntlid range [65520-65519]" 00:13:11.148 }' 00:13:11.148 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:11.148 { 00:13:11.148 "nqn": "nqn.2016-06.io.spdk:cnode7332", 00:13:11.148 "min_cntlid": 65520, 00:13:11.148 "method": "nvmf_create_subsystem", 00:13:11.148 "req_id": 1 00:13:11.148 } 00:13:11.148 Got JSON-RPC error response 00:13:11.148 response: 00:13:11.148 { 00:13:11.148 "code": -32602, 00:13:11.148 "message": "Invalid cntlid range [65520-65519]" 00:13:11.148 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:11.148 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8007 -I 0 00:13:11.148 [2024-11-06 10:55:02.495707] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode8007: invalid cntlid range [1-0] 00:13:11.148 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:11.148 { 00:13:11.148 "nqn": "nqn.2016-06.io.spdk:cnode8007", 00:13:11.148 "max_cntlid": 0, 00:13:11.148 "method": "nvmf_create_subsystem", 00:13:11.148 "req_id": 1 00:13:11.148 } 00:13:11.148 Got JSON-RPC error response 00:13:11.148 response: 00:13:11.148 { 00:13:11.148 "code": -32602, 00:13:11.148 "message": "Invalid cntlid range [1-0]" 00:13:11.148 }' 00:13:11.148 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:11.148 { 00:13:11.148 "nqn": "nqn.2016-06.io.spdk:cnode8007", 00:13:11.148 "max_cntlid": 0, 00:13:11.148 "method": "nvmf_create_subsystem", 00:13:11.148 "req_id": 1 00:13:11.148 } 00:13:11.148 Got JSON-RPC error response 00:13:11.148 response: 00:13:11.148 { 00:13:11.148 "code": -32602, 00:13:11.148 "message": "Invalid cntlid range [1-0]" 00:13:11.148 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:11.148 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21086 -I 65520 00:13:11.409 [2024-11-06 10:55:02.684313] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21086: invalid cntlid range [1-65520] 00:13:11.409 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:11.409 { 00:13:11.409 "nqn": "nqn.2016-06.io.spdk:cnode21086", 00:13:11.409 "max_cntlid": 65520, 00:13:11.409 "method": "nvmf_create_subsystem", 00:13:11.409 "req_id": 1 00:13:11.409 } 00:13:11.409 Got JSON-RPC error response 00:13:11.409 response: 00:13:11.409 { 00:13:11.409 "code": -32602, 00:13:11.409 "message": "Invalid cntlid range [1-65520]" 00:13:11.409 }' 00:13:11.409 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 
-- # [[ request: 00:13:11.409 { 00:13:11.409 "nqn": "nqn.2016-06.io.spdk:cnode21086", 00:13:11.409 "max_cntlid": 65520, 00:13:11.409 "method": "nvmf_create_subsystem", 00:13:11.409 "req_id": 1 00:13:11.409 } 00:13:11.409 Got JSON-RPC error response 00:13:11.409 response: 00:13:11.409 { 00:13:11.409 "code": -32602, 00:13:11.409 "message": "Invalid cntlid range [1-65520]" 00:13:11.409 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:11.409 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7967 -i 6 -I 5 00:13:11.670 [2024-11-06 10:55:02.872924] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7967: invalid cntlid range [6-5] 00:13:11.670 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:11.670 { 00:13:11.670 "nqn": "nqn.2016-06.io.spdk:cnode7967", 00:13:11.670 "min_cntlid": 6, 00:13:11.670 "max_cntlid": 5, 00:13:11.670 "method": "nvmf_create_subsystem", 00:13:11.670 "req_id": 1 00:13:11.670 } 00:13:11.670 Got JSON-RPC error response 00:13:11.670 response: 00:13:11.670 { 00:13:11.670 "code": -32602, 00:13:11.670 "message": "Invalid cntlid range [6-5]" 00:13:11.670 }' 00:13:11.670 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:11.670 { 00:13:11.670 "nqn": "nqn.2016-06.io.spdk:cnode7967", 00:13:11.670 "min_cntlid": 6, 00:13:11.670 "max_cntlid": 5, 00:13:11.670 "method": "nvmf_create_subsystem", 00:13:11.670 "req_id": 1 00:13:11.670 } 00:13:11.670 Got JSON-RPC error response 00:13:11.670 response: 00:13:11.670 { 00:13:11.670 "code": -32602, 00:13:11.670 "message": "Invalid cntlid range [6-5]" 00:13:11.670 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:11.670 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:11.670 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:11.670 { 00:13:11.670 "name": "foobar", 00:13:11.670 "method": "nvmf_delete_target", 00:13:11.670 "req_id": 1 00:13:11.670 } 00:13:11.670 Got JSON-RPC error response 00:13:11.670 response: 00:13:11.670 { 00:13:11.670 "code": -32602, 00:13:11.670 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:11.670 }' 00:13:11.670 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:11.670 { 00:13:11.670 "name": "foobar", 00:13:11.670 "method": "nvmf_delete_target", 00:13:11.670 "req_id": 1 00:13:11.670 } 00:13:11.670 Got JSON-RPC error response 00:13:11.670 response: 00:13:11.670 { 00:13:11.670 "code": -32602, 00:13:11.670 "message": "The specified target doesn't exist, cannot delete it." 00:13:11.670 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:11.670 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:11.670 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:11.670 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:11.670 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:11.670 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:11.671 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:11.671 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:11.671 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:11.671 rmmod nvme_tcp 00:13:11.671 
rmmod nvme_fabrics 00:13:11.671 rmmod nvme_keyring 00:13:11.671 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:11.671 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:11.671 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:11.671 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3178329 ']' 00:13:11.671 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3178329 00:13:11.671 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 3178329 ']' 00:13:11.671 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 3178329 00:13:11.671 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:13:11.671 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:11.671 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3178329 00:13:11.932 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:11.932 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:11.932 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3178329' 00:13:11.932 killing process with pid 3178329 00:13:11.932 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 3178329 00:13:11.932 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 3178329 00:13:11.932 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:11.933 10:55:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:11.933 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:11.933 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:11.933 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:11.933 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:11.933 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:11.933 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:11.933 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:11.933 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.933 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.933 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:14.482 00:13:14.482 real 0m13.970s 00:13:14.482 user 0m20.609s 00:13:14.482 sys 0m6.608s 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:14.482 ************************************ 00:13:14.482 END TEST nvmf_invalid 00:13:14.482 ************************************ 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:14.482 ************************************ 00:13:14.482 START TEST nvmf_connect_stress 00:13:14.482 ************************************ 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:14.482 * Looking for test storage... 00:13:14.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:14.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.482 --rc genhtml_branch_coverage=1 00:13:14.482 --rc genhtml_function_coverage=1 00:13:14.482 --rc genhtml_legend=1 00:13:14.482 --rc 
geninfo_all_blocks=1 00:13:14.482 --rc geninfo_unexecuted_blocks=1 00:13:14.482 00:13:14.482 ' 00:13:14.482 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:14.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.483 --rc genhtml_branch_coverage=1 00:13:14.483 --rc genhtml_function_coverage=1 00:13:14.483 --rc genhtml_legend=1 00:13:14.483 --rc geninfo_all_blocks=1 00:13:14.483 --rc geninfo_unexecuted_blocks=1 00:13:14.483 00:13:14.483 ' 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:14.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.483 --rc genhtml_branch_coverage=1 00:13:14.483 --rc genhtml_function_coverage=1 00:13:14.483 --rc genhtml_legend=1 00:13:14.483 --rc geninfo_all_blocks=1 00:13:14.483 --rc geninfo_unexecuted_blocks=1 00:13:14.483 00:13:14.483 ' 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:14.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.483 --rc genhtml_branch_coverage=1 00:13:14.483 --rc genhtml_function_coverage=1 00:13:14.483 --rc genhtml_legend=1 00:13:14.483 --rc geninfo_all_blocks=1 00:13:14.483 --rc geninfo_unexecuted_blocks=1 00:13:14.483 00:13:14.483 ' 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.483 
10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:14.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:14.483 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.625 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.625 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:22.625 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:22.625 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:22.625 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:22.625 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:22.625 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:22.625 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:22.625 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:22.625 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:22.626 10:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:22.626 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:22.626 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.626 10:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:22.626 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:22.626 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:22.626 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:13:22.626 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.626 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.626 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:22.626 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:22.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:13:22.626 00:13:22.626 --- 10.0.0.2 ping statistics --- 00:13:22.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.626 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:13:22.626 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:22.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:13:22.626 00:13:22.626 --- 10.0.0.1 ping statistics --- 00:13:22.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.626 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:13:22.626 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.626 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:22.626 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:22.626 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.626 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:22.626 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:22.626 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.626 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3183513 00:13:22.627 10:55:13 
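The trace above shows `nvmf_tcp_init` building the two-port test topology: one port of the NIC (`cvl_0_0`) is moved into a network namespace to act as the target, the other (`cvl_0_1`) stays in the root namespace as the initiator, and a ping in each direction verifies the link before the target starts. A condensed sketch of that sequence, with the interface names and 10.0.0.x addresses taken from the log (the `RUN` wrapper is my addition — it defaults to `echo` so the plan prints without root; set `RUN=sudo` on the right hardware to execute):

```shell
# Sketch of the namespace topology nvmf_tcp_init sets up in the log above.
# RUN defaults to echo (dry run); use RUN=sudo to actually apply it.
RUN=${RUN:-echo}

setup_tcp_topology() {
    local target_if=$1 init_if=$2 ns=$3
    $RUN ip -4 addr flush "$target_if"
    $RUN ip -4 addr flush "$init_if"
    $RUN ip netns add "$ns"
    $RUN ip link set "$target_if" netns "$ns"        # target NIC moves into the namespace
    $RUN ip addr add 10.0.0.1/24 dev "$init_if"      # initiator side, root namespace
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $RUN ip link set "$init_if" up
    $RUN ip netns exec "$ns" ip link set "$target_if" up
    $RUN ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP port toward the initiator, then verify both directions
    $RUN iptables -I INPUT 1 -i "$init_if" -p tcp --dport 4420 -j ACCEPT
    $RUN ping -c 1 10.0.0.2
    $RUN ip netns exec "$ns" ping -c 1 10.0.0.1
}

setup_tcp_topology cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Moving the target NIC into its own namespace is what lets a single host run both ends of the NVMe/TCP connection over real hardware instead of loopback.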
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3183513 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 3183513 ']' 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.627 [2024-11-06 10:55:13.188434] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:13:22.627 [2024-11-06 10:55:13.188499] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.627 [2024-11-06 10:55:13.288685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:22.627 [2024-11-06 10:55:13.340856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:22.627 [2024-11-06 10:55:13.340914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.627 [2024-11-06 10:55:13.340923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.627 [2024-11-06 10:55:13.340930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.627 [2024-11-06 10:55:13.340936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.627 [2024-11-06 10:55:13.342992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.627 [2024-11-06 10:55:13.343161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.627 [2024-11-06 10:55:13.343161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:22.627 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.627 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.627 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:22.627 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.627 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.627 [2024-11-06 10:55:14.036365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.627 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.627 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:22.627 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.627 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.888 [2024-11-06 10:55:14.060854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.888 NULL1 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
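At this point connect_stress.sh has provisioned the target over RPC: a TCP transport, subsystem `nqn.2016-06.io.spdk:cnode1`, a listener on the namespaced target IP, and a null bdev. The `rpc_cmd` helper in the trace issues the same calls one would make with SPDK's `scripts/rpc.py`; the sketch below spells them out with that script (the `RPC` echo-wrapper default is my addition for dry-running — point it at a real `rpc.py` with a running `nvmf_tgt` to execute; flag meanings are my reading of the log, not authoritative):

```shell
# Target-side RPC sequence from the log, as explicit rpc.py invocations.
# Defaults to echoing the commands; set RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
# against a running nvmf_tgt to issue them for real.
RPC=${RPC:-echo rpc.py}

$RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8192-byte in-capsule data
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                          # allow any host, set serial, max namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                              # listen on the namespaced target IP
$RPC bdev_null_create NULL1 1000 512                         # null backing bdev for the stress I/O
```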
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3183839 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.888 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.149 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.149 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:23.149 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.149 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.149 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.720 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.720 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:23.720 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.720 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.720 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.981 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.981 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:23.981 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.981 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.981 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.241 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.241 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:24.241 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.241 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.241 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.501 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.501 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:24.501 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.501 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.501 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.762 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.762 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:24.762 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.762 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.762 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.332 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.332 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:25.332 10:55:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.332 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.332 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.593 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.593 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:25.593 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.593 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.593 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.853 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.853 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:25.854 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.854 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.854 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.114 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.114 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:26.114 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.114 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.114 
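The repeating `kill -0 3183839` lines above are the test's liveness check on the connect_stress process (`PERF_PID`): `kill -0` delivers no signal, it only reports via its exit status whether the PID still exists, so the script can keep firing RPCs at the target while confirming the stress workload hasn't crashed. A minimal standalone version of the pattern (with a short `sleep` standing in for the connect_stress binary):

```shell
# kill -0 liveness polling, as used around PERF_PID in the log above.
sleep 2 &                 # stand-in for the connect_stress process
PERF_PID=$!

for _ in 1 2 3; do
    kill -0 "$PERF_PID"   # exits non-zero once the process is gone
    # ... issue an RPC against the target here, as the real test does ...
done

wait "$PERF_PID"
echo "stress process stayed alive for the whole loop"
```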
10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.379 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.379 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:26.379 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.379 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.379 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.685 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.685 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:26.685 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.685 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.685 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.026 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.026 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:27.026 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.026 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.026 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.599 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.599 
10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:27.599 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.599 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.599 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.868 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.868 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:27.868 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.868 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.868 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.127 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.128 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:28.128 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.128 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.128 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.388 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.388 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:28.388 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:13:28.388 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.388 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.649 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.649 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:28.649 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.649 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.649 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.220 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.221 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:29.221 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.221 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.221 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.481 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.481 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:29.481 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.481 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.481 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:13:29.742 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.742 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:29.742 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.743 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.743 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.003 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.003 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:30.003 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.003 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.003 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.571 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.571 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:30.571 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.571 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.571 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.830 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.830 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3183839 00:13:30.830 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.830 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.830 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.091 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.091 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:31.091 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.091 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.091 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.351 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.351 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:31.351 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.351 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.351 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.612 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.612 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:31.612 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.612 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:31.612 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.183 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.183 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:32.183 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.183 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.183 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.445 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.445 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:32.445 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.445 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.445 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.707 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.707 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:32.707 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.707 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.707 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.968 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
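The trace above is connect_stress.sh (lines 34-35) repeatedly checking with `kill -0` whether the stress-test process (pid 3183839) is still alive, issuing an RPC on each pass, until the check fails and the script moves to `wait`. A minimal sketch of that polling pattern, assuming a hypothetical `poll_until_exit` helper name; the real script interleaves `rpc_cmd` calls where the placeholder work is:

```shell
#!/usr/bin/env bash
# Poll a background process with `kill -0`: signal 0 delivers nothing, it only
# tests whether the pid exists, so the loop runs until the process exits.
poll_until_exit() {
    local pid=$1
    while kill -0 "$pid" 2>/dev/null; do  # true while the process is alive
        :                                 # placeholder for per-iteration work (rpc_cmd)
        sleep 0.1
    done
    wait "$pid" 2>/dev/null               # reap the child, collect exit status
}

sleep 0.5 &                               # stand-in for the stress-test process
poll_until_exit $!
echo "process gone"
```

Once the pid disappears, `kill` prints the "No such process" diagnostic seen in the log below and the loop exits.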
00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3183839 00:13:32.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3183839) - No such process 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3183839 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:32.968 rmmod nvme_tcp 00:13:32.968 rmmod nvme_fabrics 00:13:32.968 rmmod nvme_keyring 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3183513 ']' 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3183513 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 3183513 ']' 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 3183513 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:32.968 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3183513 00:13:33.228 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:33.228 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:33.228 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3183513' 00:13:33.228 killing process with pid 3183513 00:13:33.228 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 3183513 00:13:33.229 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 3183513 00:13:33.229 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:33.229 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:33.229 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:33.229 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:13:33.229 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:33.229 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:33.229 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:33.229 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:33.229 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:33.229 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.229 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.229 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:35.775 00:13:35.775 real 0m21.188s 00:13:35.775 user 0m42.305s 00:13:35.775 sys 0m9.049s 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.775 ************************************ 00:13:35.775 END TEST nvmf_connect_stress 00:13:35.775 ************************************ 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:35.775 ************************************ 00:13:35.775 START TEST nvmf_fused_ordering 00:13:35.775 ************************************ 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:35.775 * Looking for test storage... 00:13:35.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.775 10:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:35.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.775 --rc genhtml_branch_coverage=1 00:13:35.775 --rc genhtml_function_coverage=1 00:13:35.775 --rc genhtml_legend=1 00:13:35.775 --rc geninfo_all_blocks=1 00:13:35.775 --rc geninfo_unexecuted_blocks=1 00:13:35.775 00:13:35.775 ' 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:35.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.775 --rc genhtml_branch_coverage=1 00:13:35.775 --rc genhtml_function_coverage=1 00:13:35.775 --rc genhtml_legend=1 00:13:35.775 --rc geninfo_all_blocks=1 00:13:35.775 --rc geninfo_unexecuted_blocks=1 00:13:35.775 00:13:35.775 ' 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:35.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.775 --rc genhtml_branch_coverage=1 00:13:35.775 --rc genhtml_function_coverage=1 00:13:35.775 --rc genhtml_legend=1 00:13:35.775 --rc geninfo_all_blocks=1 00:13:35.775 --rc geninfo_unexecuted_blocks=1 00:13:35.775 00:13:35.775 ' 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:35.775 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:35.775 --rc genhtml_branch_coverage=1 00:13:35.775 --rc genhtml_function_coverage=1 00:13:35.775 --rc genhtml_legend=1 00:13:35.775 --rc geninfo_all_blocks=1 00:13:35.775 --rc geninfo_unexecuted_blocks=1 00:13:35.775 00:13:35.775 ' 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.775 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.776 10:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:35.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
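The `lt 1.15 2` / `cmp_versions` trace earlier in this block is scripts/common.sh deciding whether the installed lcov is older than 2 by splitting each version string on `IFS=.-:` into `ver1`/`ver2` arrays and comparing fields numerically. A minimal sketch of that comparison under those assumptions (simplified body, not the verbatim scripts/common.sh implementation):

```shell
#!/usr/bin/env bash
# Compare two version strings field by field, splitting on '.', '-' and ':'
# the way scripts/common.sh does with IFS=.-: before walking the arrays.
lt() {  # returns 0 (true) when $1 < $2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}  # missing fields compare as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1  # equal is not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

Field-wise numeric comparison is why `1.15 < 2` holds here even though a plain string compare would put "1.15" after "2" is false but "1.9" after "1.10" true; the array walk sidesteps both problems.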
00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:35.776 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.916 10:55:34 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:43.916 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:43.916 10:55:34 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:43.916 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.916 10:55:34 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:43.916 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:43.916 Found net devices under 0000:4b:00.1: cvl_0_1 
00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.916 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:43.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:43.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:13:43.917 00:13:43.917 --- 10.0.0.2 ping statistics --- 00:13:43.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.917 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:13:43.917 00:13:43.917 --- 10.0.0.1 ping statistics --- 00:13:43.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.917 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:43.917 10:55:34 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3190050 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3190050 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 3190050 ']' 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:43.917 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.917 [2024-11-06 10:55:34.452725] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:13:43.917 [2024-11-06 10:55:34.452803] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.917 [2024-11-06 10:55:34.553635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.917 [2024-11-06 10:55:34.606617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.917 [2024-11-06 10:55:34.606671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.917 [2024-11-06 10:55:34.606679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.917 [2024-11-06 10:55:34.606687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.917 [2024-11-06 10:55:34.606694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:43.917 [2024-11-06 10:55:34.607447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.917 [2024-11-06 10:55:35.302530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.917 [2024-11-06 10:55:35.326795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.917 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.177 NULL1 00:13:44.177 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.177 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:44.177 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.177 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.177 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.177 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:44.177 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.177 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.177 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.177 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:44.177 [2024-11-06 10:55:35.400080] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:13:44.177 [2024-11-06 10:55:35.400142] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3190232 ] 00:13:44.438 Attached to nqn.2016-06.io.spdk:cnode1 00:13:44.438 Namespace ID: 1 size: 1GB 00:13:44.438 fused_ordering(0) 00:13:44.438 fused_ordering(1) 00:13:44.438 fused_ordering(2) 00:13:44.438 fused_ordering(3) 00:13:44.438 fused_ordering(4) 00:13:44.438 fused_ordering(5) 00:13:44.438 fused_ordering(6) 00:13:44.438 fused_ordering(7) 00:13:44.438 fused_ordering(8) 00:13:44.438 fused_ordering(9) 00:13:44.438 fused_ordering(10) 00:13:44.438 fused_ordering(11) 00:13:44.438 fused_ordering(12) 00:13:44.438 fused_ordering(13) 00:13:44.438 fused_ordering(14) 00:13:44.438 fused_ordering(15) 00:13:44.438 fused_ordering(16) 00:13:44.438 fused_ordering(17) 00:13:44.438 fused_ordering(18) 00:13:44.438 fused_ordering(19) 00:13:44.438 fused_ordering(20) 00:13:44.438 fused_ordering(21) 00:13:44.438 fused_ordering(22) 00:13:44.438 fused_ordering(23) 00:13:44.438 fused_ordering(24) 00:13:44.438 fused_ordering(25) 00:13:44.438 fused_ordering(26) 00:13:44.438 fused_ordering(27) 00:13:44.438 
fused_ordering(28) 00:13:44.438 fused_ordering(29) 00:13:44.438 fused_ordering(30) 00:13:44.438 fused_ordering(31) 00:13:44.438 fused_ordering(32) 00:13:44.438 fused_ordering(33) 00:13:44.438 fused_ordering(34) 00:13:44.438 fused_ordering(35) 00:13:44.438 fused_ordering(36) 00:13:44.438 fused_ordering(37) 00:13:44.438 fused_ordering(38) 00:13:44.438 fused_ordering(39) 00:13:44.438 fused_ordering(40) 00:13:44.438 fused_ordering(41) 00:13:44.438 fused_ordering(42) 00:13:44.438 fused_ordering(43) 00:13:44.438 fused_ordering(44) 00:13:44.438 fused_ordering(45) 00:13:44.438 fused_ordering(46) 00:13:44.438 fused_ordering(47) 00:13:44.438 fused_ordering(48) 00:13:44.438 fused_ordering(49) 00:13:44.438 fused_ordering(50) 00:13:44.438 fused_ordering(51) 00:13:44.438 fused_ordering(52) 00:13:44.438 fused_ordering(53) 00:13:44.438 fused_ordering(54) 00:13:44.438 fused_ordering(55) 00:13:44.438 fused_ordering(56) 00:13:44.438 fused_ordering(57) 00:13:44.438 fused_ordering(58) 00:13:44.438 fused_ordering(59) 00:13:44.438 fused_ordering(60) 00:13:44.438 fused_ordering(61) 00:13:44.438 fused_ordering(62) 00:13:44.438 fused_ordering(63) 00:13:44.438 fused_ordering(64) 00:13:44.438 fused_ordering(65) 00:13:44.438 fused_ordering(66) 00:13:44.438 fused_ordering(67) 00:13:44.438 fused_ordering(68) 00:13:44.438 fused_ordering(69) 00:13:44.438 fused_ordering(70) 00:13:44.438 fused_ordering(71) 00:13:44.438 fused_ordering(72) 00:13:44.438 fused_ordering(73) 00:13:44.438 fused_ordering(74) 00:13:44.438 fused_ordering(75) 00:13:44.438 fused_ordering(76) 00:13:44.438 fused_ordering(77) 00:13:44.438 fused_ordering(78) 00:13:44.438 fused_ordering(79) 00:13:44.438 fused_ordering(80) 00:13:44.438 fused_ordering(81) 00:13:44.438 fused_ordering(82) 00:13:44.438 fused_ordering(83) 00:13:44.438 fused_ordering(84) 00:13:44.438 fused_ordering(85) 00:13:44.438 fused_ordering(86) 00:13:44.438 fused_ordering(87) 00:13:44.438 fused_ordering(88) 00:13:44.438 fused_ordering(89) 00:13:44.438 
fused_ordering(90) 00:13:44.438 fused_ordering(91) 00:13:44.438 fused_ordering(92) 00:13:44.438 fused_ordering(93) 00:13:44.438 fused_ordering(94) 00:13:44.438 fused_ordering(95) 00:13:44.438 fused_ordering(96) 00:13:44.438 fused_ordering(97) 00:13:44.438 fused_ordering(98) 00:13:44.438 fused_ordering(99) 00:13:44.438 fused_ordering(100) 00:13:44.438 fused_ordering(101) 00:13:44.438 fused_ordering(102) 00:13:44.438 fused_ordering(103) 00:13:44.438 fused_ordering(104) 00:13:44.438 fused_ordering(105) 00:13:44.438 fused_ordering(106) 00:13:44.438 fused_ordering(107) 00:13:44.438 fused_ordering(108) 00:13:44.438 fused_ordering(109) 00:13:44.438 fused_ordering(110) 00:13:44.438 fused_ordering(111) 00:13:44.438 fused_ordering(112) 00:13:44.438 fused_ordering(113) 00:13:44.438 fused_ordering(114) 00:13:44.438 fused_ordering(115) 00:13:44.438 fused_ordering(116) 00:13:44.438 fused_ordering(117) 00:13:44.438 fused_ordering(118) 00:13:44.438 fused_ordering(119) 00:13:44.439 fused_ordering(120) 00:13:44.439 fused_ordering(121) 00:13:44.439 fused_ordering(122) 00:13:44.439 fused_ordering(123) 00:13:44.439 fused_ordering(124) 00:13:44.439 fused_ordering(125) 00:13:44.439 fused_ordering(126) 00:13:44.439 fused_ordering(127) 00:13:44.439 fused_ordering(128) 00:13:44.439 fused_ordering(129) 00:13:44.439 fused_ordering(130) 00:13:44.439 fused_ordering(131) 00:13:44.439 fused_ordering(132) 00:13:44.439 fused_ordering(133) 00:13:44.439 fused_ordering(134) 00:13:44.439 fused_ordering(135) 00:13:44.439 fused_ordering(136) 00:13:44.439 fused_ordering(137) 00:13:44.439 fused_ordering(138) 00:13:44.439 fused_ordering(139) 00:13:44.439 fused_ordering(140) 00:13:44.439 fused_ordering(141) 00:13:44.439 fused_ordering(142) 00:13:44.439 fused_ordering(143) 00:13:44.439 fused_ordering(144) 00:13:44.439 fused_ordering(145) 00:13:44.439 fused_ordering(146) 00:13:44.439 fused_ordering(147) 00:13:44.439 fused_ordering(148) 00:13:44.439 fused_ordering(149) 00:13:44.439 fused_ordering(150) 
00:13:44.439 fused_ordering(151) 00:13:44.439 fused_ordering(152) 00:13:44.439 fused_ordering(153) 00:13:44.439 fused_ordering(154) 00:13:44.439 fused_ordering(155) 00:13:44.439 fused_ordering(156) 00:13:44.439 fused_ordering(157) 00:13:44.439 fused_ordering(158) 00:13:44.439 fused_ordering(159) 00:13:44.439 fused_ordering(160) 00:13:44.439 fused_ordering(161) 00:13:44.439 fused_ordering(162) 00:13:44.439 fused_ordering(163) 00:13:44.439 fused_ordering(164) 00:13:44.439 fused_ordering(165) 00:13:44.439 fused_ordering(166) 00:13:44.439 fused_ordering(167) 00:13:44.439 fused_ordering(168) 00:13:44.439 fused_ordering(169) 00:13:44.439 fused_ordering(170) 00:13:44.439 fused_ordering(171) 00:13:44.439 fused_ordering(172) 00:13:44.439 fused_ordering(173) 00:13:44.439 fused_ordering(174) 00:13:44.439 fused_ordering(175) 00:13:44.439 fused_ordering(176) 00:13:44.439 fused_ordering(177) 00:13:44.439 fused_ordering(178) 00:13:44.439 fused_ordering(179) 00:13:44.439 fused_ordering(180) 00:13:44.439 fused_ordering(181) 00:13:44.439 fused_ordering(182) 00:13:44.439 fused_ordering(183) 00:13:44.439 fused_ordering(184) 00:13:44.439 fused_ordering(185) 00:13:44.439 fused_ordering(186) 00:13:44.439 fused_ordering(187) 00:13:44.439 fused_ordering(188) 00:13:44.439 fused_ordering(189) 00:13:44.439 fused_ordering(190) 00:13:44.439 fused_ordering(191) 00:13:44.439 fused_ordering(192) 00:13:44.439 fused_ordering(193) 00:13:44.439 fused_ordering(194) 00:13:44.439 fused_ordering(195) 00:13:44.439 fused_ordering(196) 00:13:44.439 fused_ordering(197) 00:13:44.439 fused_ordering(198) 00:13:44.439 fused_ordering(199) 00:13:44.439 fused_ordering(200) 00:13:44.439 fused_ordering(201) 00:13:44.439 fused_ordering(202) 00:13:44.439 fused_ordering(203) 00:13:44.439 fused_ordering(204) 00:13:44.439 fused_ordering(205) 00:13:45.009 fused_ordering(206) 00:13:45.009 fused_ordering(207) 00:13:45.009 fused_ordering(208) 00:13:45.009 fused_ordering(209) 00:13:45.009 fused_ordering(210) 00:13:45.009 
fused_ordering(211) 00:13:45.009 [fused_ordering(212) through fused_ordering(1023) elided: identical per-iteration trace entries, timestamps 00:13:45.009 through 00:13:46.414] 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:46.414 rmmod nvme_tcp 00:13:46.414 rmmod nvme_fabrics 00:13:46.414 rmmod nvme_keyring 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3190050 ']' 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3190050 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 3190050 ']' 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 3190050 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3190050 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3190050' 00:13:46.414 killing process with pid 3190050 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 3190050 00:13:46.414 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 3190050 00:13:46.675 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:46.675 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:13:46.675 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:46.675 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:46.675 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:46.675 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:46.675 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:46.675 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:46.675 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:46.675 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.675 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.675 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.605 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:48.605 00:13:48.605 real 0m13.253s 00:13:48.605 user 0m6.997s 00:13:48.605 sys 0m6.958s 00:13:48.605 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:48.605 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.605 ************************************ 00:13:48.605 END TEST nvmf_fused_ordering 00:13:48.605 ************************************ 00:13:48.605 10:55:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:48.605 10:55:39 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:48.605 10:55:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:48.605 10:55:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:48.867 ************************************ 00:13:48.867 START TEST nvmf_ns_masking 00:13:48.867 ************************************ 00:13:48.867 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:48.867 * Looking for test storage... 00:13:48.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.868 10:55:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:48.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.868 --rc genhtml_branch_coverage=1 00:13:48.868 --rc genhtml_function_coverage=1 00:13:48.868 --rc genhtml_legend=1 00:13:48.868 --rc geninfo_all_blocks=1 00:13:48.868 --rc geninfo_unexecuted_blocks=1 00:13:48.868 00:13:48.868 ' 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:48.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.868 --rc genhtml_branch_coverage=1 00:13:48.868 --rc genhtml_function_coverage=1 00:13:48.868 --rc genhtml_legend=1 00:13:48.868 --rc geninfo_all_blocks=1 00:13:48.868 --rc geninfo_unexecuted_blocks=1 00:13:48.868 00:13:48.868 ' 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:48.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.868 --rc genhtml_branch_coverage=1 00:13:48.868 --rc genhtml_function_coverage=1 00:13:48.868 --rc genhtml_legend=1 00:13:48.868 --rc geninfo_all_blocks=1 00:13:48.868 --rc geninfo_unexecuted_blocks=1 00:13:48.868 00:13:48.868 ' 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:48.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.868 --rc genhtml_branch_coverage=1 00:13:48.868 --rc 
genhtml_function_coverage=1 00:13:48.868 --rc genhtml_legend=1 00:13:48.868 --rc geninfo_all_blocks=1 00:13:48.868 --rc geninfo_unexecuted_blocks=1 00:13:48.868 00:13:48.868 ' 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:48.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:48.868 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=874acea3-6877-437f-a465-af4d0adc76de 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=46a5b356-0d57-4b4d-939c-c81e0f1ada9a 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ec53a10c-e0a4-4e7f-abab-a1cb367ea6bb 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:48.869 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:55.460 10:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.460 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.460 10:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:55.461 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:55.461 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:13:55.461 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:55.461 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.461 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.723 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.723 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.723 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:55.723 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:55.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:55.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:13:55.723 00:13:55.723 --- 10.0.0.2 ping statistics --- 00:13:55.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.723 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:55.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:13:55.723 00:13:55.723 --- 10.0.0.1 ping statistics --- 00:13:55.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.723 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3194896 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3194896 
00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3194896 ']' 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:55.723 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:55.983 [2024-11-06 10:55:47.191425] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:13:55.983 [2024-11-06 10:55:47.191517] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.983 [2024-11-06 10:55:47.275193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.983 [2024-11-06 10:55:47.316168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.983 [2024-11-06 10:55:47.316206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:55.983 [2024-11-06 10:55:47.316214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.983 [2024-11-06 10:55:47.316220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.983 [2024-11-06 10:55:47.316226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:55.983 [2024-11-06 10:55:47.316846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.553 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:56.553 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:13:56.553 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:56.553 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:56.553 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:56.815 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.815 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:56.815 [2024-11-06 10:55:48.150260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.815 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:56.815 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:56.815 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:13:57.075 Malloc1 00:13:57.075 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:57.337 Malloc2 00:13:57.337 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:57.337 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:57.598 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.598 [2024-11-06 10:55:48.966131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.598 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:57.598 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ec53a10c-e0a4-4e7f-abab-a1cb367ea6bb -a 10.0.0.2 -s 4420 -i 4 00:13:57.858 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:57.858 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:13:57.858 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:57.858 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:57.858 10:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:13:59.779 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:00.040 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:00.040 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:00.040 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:00.040 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:00.040 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:00.040 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:00.040 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:00.040 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:00.040 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:00.040 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:00.040 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.040 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:00.040 [ 0]:0x1 00:14:00.040 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:00.040 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:00.040 
10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83560200751d455e84a0b607d8018fd9 00:14:00.040 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 83560200751d455e84a0b607d8018fd9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:00.040 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:00.301 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:00.301 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.301 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:00.301 [ 0]:0x1 00:14:00.301 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:00.301 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:00.301 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83560200751d455e84a0b607d8018fd9 00:14:00.301 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 83560200751d455e84a0b607d8018fd9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:00.301 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:00.301 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.301 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:00.301 [ 1]:0x2 00:14:00.301 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
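The `ns_is_visible` checks traced above reduce to one comparison: `nvme id-ns … -o json | jq -r .nguid` is matched against 32 zeros, because a namespace masked away from the connecting host identifies with an all-zero NGUID. A minimal sketch of just that comparison, detached from the nvme CLI (the helper name `is_visible` is hypothetical; the sample NGUID values are taken from the log above):

```shell
# Sketch of the ns_is_visible NGUID test from target/ns_masking.sh:
# a namespace hidden from this host reports an all-zero NGUID.
is_visible() {
    nguid="$1"   # normally: nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid
    [ "$nguid" != "00000000000000000000000000000000" ]
}

is_visible "83560200751d455e84a0b607d8018fd9" && echo "ns visible"
is_visible "00000000000000000000000000000000" || echo "ns masked"
```

In the test itself this check is additionally wrapped in `NOT` (via `valid_exec_arg`) when the expectation is that the namespace is hidden, which is why the trace shows `es=1` paths after each masked lookup.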
00:14:00.301 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:00.301 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf3383fa610a4602a7814fb2169e544e 00:14:00.301 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf3383fa610a4602a7814fb2169e544e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:00.301 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:00.301 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:00.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.562 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.562 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:00.823 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:00.823 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ec53a10c-e0a4-4e7f-abab-a1cb367ea6bb -a 10.0.0.2 -s 4420 -i 4 00:14:00.823 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:00.823 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:00.823 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:00.823 10:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:14:00.823 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:14:00.823 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:03.369 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:03.369 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:03.369 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:03.369 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:03.369 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:03.369 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:03.369 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:03.369 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:03.369 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:03.369 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:03.370 [ 0]:0x2 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf3383fa610a4602a7814fb2169e544e 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf3383fa610a4602a7814fb2169e544e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:03.370 [ 0]:0x1 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83560200751d455e84a0b607d8018fd9 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 83560200751d455e84a0b607d8018fd9 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:03.370 [ 1]:0x2 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf3383fa610a4602a7814fb2169e544e 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf3383fa610a4602a7814fb2169e544e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.370 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:03.631 [ 0]:0x2 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf3383fa610a4602a7814fb2169e544e 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf3383fa610a4602a7814fb2169e544e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:03.631 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:03.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.631 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:03.891 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:03.891 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ec53a10c-e0a4-4e7f-abab-a1cb367ea6bb -a 10.0.0.2 -s 4420 -i 4 00:14:04.153 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:04.153 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:04.153 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.153 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:04.153 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:04.153 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:06.067 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:06.067 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:06.067 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.067 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:06.067 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.067 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:06.067 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:06.067 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:06.067 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:06.067 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:06.067 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:06.067 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.067 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:06.067 [ 0]:0x1 00:14:06.067 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:06.067 10:55:57 
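The `waitforserial` loop traced above polls `lsblk -l -o NAME,SERIAL` and counts lines matching the subsystem serial with `grep -c`, returning once the count reaches the expected number of namespaces. A minimal sketch of that counting step, with the `lsblk` output simulated so it runs anywhere (the device names below are illustrative):

```shell
# Sketch of the waitforserial device count from common/autotest_common.sh:
# count block devices whose SERIAL column matches the subsystem serial.
expected=2
count=$(printf 'nvme0n1 SPDKISFASTANDAWESOME\nnvme0n2 SPDKISFASTANDAWESOME\n' \
    | grep -c SPDKISFASTANDAWESOME)
[ "$count" -eq "$expected" ] && echo "all expected namespaces present"
```

In the real helper this sits inside a retry loop (`(( i++ <= 15 ))` with `sleep 2`), visible in the trace as the repeated `@1208`/`@1209` lines.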
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.329 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83560200751d455e84a0b607d8018fd9 00:14:06.329 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 83560200751d455e84a0b607d8018fd9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.329 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:06.329 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.329 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:06.330 [ 1]:0x2 00:14:06.330 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.330 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:06.330 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf3383fa610a4602a7814fb2169e544e 00:14:06.330 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf3383fa610a4602a7814fb2169e544e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.330 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:06.330 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:06.330 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:06.330 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:06.330 
10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:06.330 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.330 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:06.330 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.330 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:06.330 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.330 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:06.330 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:06.330 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.591 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:06.591 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.591 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:06.591 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:06.591 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:06.592 [ 0]:0x2 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf3383fa610a4602a7814fb2169e544e 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf3383fa610a4602a7814fb2169e544e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.592 10:55:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:06.592 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:06.592 [2024-11-06 10:55:57.976312] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:06.592 request: 00:14:06.592 { 00:14:06.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.592 "nsid": 2, 00:14:06.592 "host": "nqn.2016-06.io.spdk:host1", 00:14:06.592 "method": "nvmf_ns_remove_host", 00:14:06.592 "req_id": 1 00:14:06.592 } 00:14:06.592 Got JSON-RPC error response 00:14:06.592 response: 00:14:06.592 { 00:14:06.592 "code": -32602, 00:14:06.592 "message": "Invalid parameters" 00:14:06.592 } 00:14:06.592 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:06.592 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:06.592 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:06.592 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:06.592 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:06.592 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:06.592 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:06.592 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:06.853 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.853 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:06.853 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.853 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:06.853 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.853 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:06.853 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:06.853 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.853 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:06.853 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.853 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:06.853 10:55:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:06.853 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:06.853 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:06.853 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:06.853 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.853 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:06.853 [ 0]:0x2 00:14:06.854 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:06.854 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.854 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf3383fa610a4602a7814fb2169e544e 00:14:06.854 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf3383fa610a4602a7814fb2169e544e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.854 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:06.854 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:07.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.115 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3197077 00:14:07.115 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:07.115 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.115 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3197077 /var/tmp/host.sock 00:14:07.115 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3197077 ']' 00:14:07.115 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:14:07.115 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:07.115 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:07.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:07.115 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:07.115 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:07.115 [2024-11-06 10:55:58.369938] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:14:07.115 [2024-11-06 10:55:58.369989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3197077 ] 00:14:07.115 [2024-11-06 10:55:58.457474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.115 [2024-11-06 10:55:58.494137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.058 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:08.058 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:08.058 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.058 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:08.319 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 874acea3-6877-437f-a465-af4d0adc76de 00:14:08.319 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:08.319 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 874ACEA36877437FA465AF4D0ADC76DE -i 00:14:08.319 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 46a5b356-0d57-4b4d-939c-c81e0f1ada9a 00:14:08.319 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:08.319 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 46A5B3560D574B4D939CC81E0F1ADA9A -i 00:14:08.581 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:08.581 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:08.842 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:08.842 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:09.103 nvme0n1 00:14:09.103 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:09.103 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:09.364 nvme1n2 00:14:09.364 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:09.364 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:09.364 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:09.364 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:09.364 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:09.625 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:09.625 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:09.625 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:09.625 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:09.885 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 874acea3-6877-437f-a465-af4d0adc76de == \8\7\4\a\c\e\a\3\-\6\8\7\7\-\4\3\7\f\-\a\4\6\5\-\a\f\4\d\0\a\d\c\7\6\d\e ]] 00:14:09.885 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:09.885 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:09.885 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:09.885 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 46a5b356-0d57-4b4d-939c-c81e0f1ada9a == \4\6\a\5\b\3\5\6\-\0\d\5\7\-\4\b\4\d\-\9\3\9\c\-\c\8\1\e\0\f\1\a\d\a\9\a ]] 00:14:09.885 10:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.146 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:10.146 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 874acea3-6877-437f-a465-af4d0adc76de 00:14:10.146 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:10.146 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 874ACEA36877437FA465AF4D0ADC76DE 00:14:10.146 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:10.146 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 874ACEA36877437FA465AF4D0ADC76DE 00:14:10.146 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.406 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.406 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.406 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.406 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.406 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.406 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.406 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:10.406 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 874ACEA36877437FA465AF4D0ADC76DE 00:14:10.406 [2024-11-06 10:56:01.727228] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:10.407 [2024-11-06 10:56:01.727261] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:10.407 [2024-11-06 10:56:01.727271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.407 request: 00:14:10.407 { 00:14:10.407 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.407 "namespace": { 00:14:10.407 "bdev_name": "invalid", 00:14:10.407 "nsid": 1, 00:14:10.407 "nguid": "874ACEA36877437FA465AF4D0ADC76DE", 00:14:10.407 "no_auto_visible": false 00:14:10.407 }, 00:14:10.407 "method": "nvmf_subsystem_add_ns", 00:14:10.407 "req_id": 1 00:14:10.407 } 00:14:10.407 Got JSON-RPC error response 00:14:10.407 response: 00:14:10.407 { 00:14:10.407 "code": -32602, 00:14:10.407 "message": "Invalid parameters" 00:14:10.407 } 00:14:10.407 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:10.407 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:10.407 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:10.407 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:10.407 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 874acea3-6877-437f-a465-af4d0adc76de 00:14:10.407 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:10.407 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 874ACEA36877437FA465AF4D0ADC76DE -i 00:14:10.667 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:12.577 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:12.577 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:12.577 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:12.838 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:12.838 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3197077 00:14:12.838 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3197077 ']' 00:14:12.838 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3197077 00:14:12.838 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:12.838 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:12.838 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3197077 00:14:12.838 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:12.838 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:12.838 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3197077' 00:14:12.838 killing process with pid 3197077 00:14:12.838 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3197077 00:14:12.838 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3197077 00:14:13.099 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:13.360 rmmod nvme_tcp 00:14:13.360 rmmod 
nvme_fabrics 00:14:13.360 rmmod nvme_keyring 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3194896 ']' 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3194896 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3194896 ']' 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3194896 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3194896 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:13.360 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:13.361 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3194896' 00:14:13.361 killing process with pid 3194896 00:14:13.361 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3194896 00:14:13.361 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3194896 00:14:13.622 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:13.622 
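The `killprocess` calls traced above follow a guarded kill pattern: confirm the pid is still alive with `kill -0`, read its command name via `ps --no-headers -o comm=` (the `reactor_0` / `reactor_1` names in the log) so a recycled pid is not killed by mistake, then kill and reap it. A rough standalone sketch of that flow; the real helper lives in autotest_common.sh, and this simplified version is an assumption, not its exact code:

```shell
#!/usr/bin/env bash
# Illustrative stand-in for the killprocess helper seen in the log above.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # process already gone: nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")  # e.g. "reactor_0" for an SPDK target
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap the child so no zombie remains
    echo "killed process $pid ($name)"
}

# Demonstrate against a throwaway background process.
sleep 30 &
killprocess_sketch $!
```

The name check matters in long CI runs: pids are reused, so killing by number alone could hit an unrelated process started after the target exited.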
10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:13.622 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:13.622 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:13.622 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:13.622 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:13.622 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:13.622 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:13.622 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:13.622 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.622 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.622 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.535 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:15.535 00:14:15.535 real 0m26.862s 00:14:15.535 user 0m30.501s 00:14:15.535 sys 0m7.638s 00:14:15.535 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:15.535 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.535 ************************************ 00:14:15.535 END TEST nvmf_ns_masking 00:14:15.535 ************************************ 00:14:15.535 10:56:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:15.535 10:56:06 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:15.535 10:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:15.535 10:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:15.535 10:56:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:15.797 ************************************ 00:14:15.797 START TEST nvmf_nvme_cli 00:14:15.797 ************************************ 00:14:15.797 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:15.797 * Looking for test storage... 00:14:15.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:15.797 10:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.797 --rc genhtml_branch_coverage=1 00:14:15.797 --rc genhtml_function_coverage=1 00:14:15.797 --rc genhtml_legend=1 00:14:15.797 --rc geninfo_all_blocks=1 00:14:15.797 --rc geninfo_unexecuted_blocks=1 00:14:15.797 
00:14:15.797 ' 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.797 --rc genhtml_branch_coverage=1 00:14:15.797 --rc genhtml_function_coverage=1 00:14:15.797 --rc genhtml_legend=1 00:14:15.797 --rc geninfo_all_blocks=1 00:14:15.797 --rc geninfo_unexecuted_blocks=1 00:14:15.797 00:14:15.797 ' 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.797 --rc genhtml_branch_coverage=1 00:14:15.797 --rc genhtml_function_coverage=1 00:14:15.797 --rc genhtml_legend=1 00:14:15.797 --rc geninfo_all_blocks=1 00:14:15.797 --rc geninfo_unexecuted_blocks=1 00:14:15.797 00:14:15.797 ' 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.797 --rc genhtml_branch_coverage=1 00:14:15.797 --rc genhtml_function_coverage=1 00:14:15.797 --rc genhtml_legend=1 00:14:15.797 --rc geninfo_all_blocks=1 00:14:15.797 --rc geninfo_unexecuted_blocks=1 00:14:15.797 00:14:15.797 ' 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.797 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.798 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.798 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:15.798 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.798 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:15.798 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:15.798 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:15.798 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.798 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.798 10:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.798 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:15.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:15.798 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:15.798 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:15.798 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:16.059 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:16.059 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:16.059 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:16.059 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:16.059 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:16.059 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.059 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:16.059 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:16.059 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:16.059 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.059 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.059 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:16.059 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:16.059 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:16.059 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:16.059 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:22.811 10:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:22.811 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:22.811 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:22.811 10:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:22.811 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:22.812 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:22.812 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:22.812 10:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:22.812 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:23.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:14:23.073 00:14:23.073 --- 10.0.0.2 ping statistics --- 00:14:23.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.073 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:23.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:23.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:14:23.073 00:14:23.073 --- 10.0.0.1 ping statistics --- 00:14:23.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.073 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:23.073 10:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3202611 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3202611 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 3202611 ']' 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:23.073 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.073 [2024-11-06 10:56:14.485966] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:14:23.073 [2024-11-06 10:56:14.486035] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.333 [2024-11-06 10:56:14.569811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.334 [2024-11-06 10:56:14.613396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.334 [2024-11-06 10:56:14.613434] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.334 [2024-11-06 10:56:14.613446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.334 [2024-11-06 10:56:14.613453] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.334 [2024-11-06 10:56:14.613459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:23.334 [2024-11-06 10:56:14.615070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.334 [2024-11-06 10:56:14.615186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.334 [2024-11-06 10:56:14.615343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.334 [2024-11-06 10:56:14.615343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.904 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:23.904 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:14:23.904 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:23.904 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:23.904 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:24.164 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.164 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:24.164 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:24.165 [2024-11-06 10:56:15.342342] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:24.165 Malloc0 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:24.165 Malloc1 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:24.165 [2024-11-06 10:56:15.441578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.165 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:24.426 00:14:24.426 Discovery Log Number of Records 2, Generation counter 2 00:14:24.426 =====Discovery Log Entry 0====== 00:14:24.426 trtype: tcp 00:14:24.426 adrfam: ipv4 00:14:24.426 subtype: current discovery subsystem 00:14:24.426 treq: not required 00:14:24.426 portid: 0 00:14:24.426 trsvcid: 4420 
00:14:24.426 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:24.426 traddr: 10.0.0.2 00:14:24.426 eflags: explicit discovery connections, duplicate discovery information 00:14:24.426 sectype: none 00:14:24.426 =====Discovery Log Entry 1====== 00:14:24.426 trtype: tcp 00:14:24.426 adrfam: ipv4 00:14:24.426 subtype: nvme subsystem 00:14:24.426 treq: not required 00:14:24.426 portid: 0 00:14:24.426 trsvcid: 4420 00:14:24.426 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:24.426 traddr: 10.0.0.2 00:14:24.426 eflags: none 00:14:24.426 sectype: none 00:14:24.426 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:24.426 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:24.426 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:24.426 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:24.426 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:24.426 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:24.426 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:24.426 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:24.426 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:24.426 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:24.426 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:26.337 10:56:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:26.337 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:14:26.337 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:26.337 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:26.337 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:26.337 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:28.251 
10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:28.251 /dev/nvme0n2 ]] 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:28.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # 
return 0 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:28.251 rmmod nvme_tcp 00:14:28.251 rmmod nvme_fabrics 00:14:28.251 rmmod nvme_keyring 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3202611 ']' 
00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3202611 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 3202611 ']' 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 3202611 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3202611 00:14:28.251 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:28.252 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:28.252 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3202611' 00:14:28.252 killing process with pid 3202611 00:14:28.252 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 3202611 00:14:28.252 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 3202611 00:14:28.512 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:28.512 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:28.512 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:28.512 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:28.513 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:28.513 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:14:28.513 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:28.513 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:28.513 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:28.513 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.513 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.513 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.427 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:30.427 00:14:30.427 real 0m14.838s 00:14:30.427 user 0m22.505s 00:14:30.427 sys 0m6.171s 00:14:30.427 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:30.427 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.427 ************************************ 00:14:30.427 END TEST nvmf_nvme_cli 00:14:30.427 ************************************ 00:14:30.689 10:56:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:30.689 10:56:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:30.689 10:56:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:30.689 10:56:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:30.689 10:56:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:30.689 ************************************ 00:14:30.689 
START TEST nvmf_vfio_user 00:14:30.689 ************************************ 00:14:30.689 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:30.689 * Looking for test storage... 00:14:30.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.689 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:30.689 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:30.689 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.689 10:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:30.689 10:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:30.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.689 --rc genhtml_branch_coverage=1 00:14:30.689 --rc genhtml_function_coverage=1 00:14:30.689 --rc genhtml_legend=1 00:14:30.689 --rc geninfo_all_blocks=1 00:14:30.689 --rc geninfo_unexecuted_blocks=1 00:14:30.689 00:14:30.689 ' 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:30.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.689 --rc genhtml_branch_coverage=1 00:14:30.689 --rc genhtml_function_coverage=1 00:14:30.689 --rc genhtml_legend=1 00:14:30.689 --rc geninfo_all_blocks=1 00:14:30.689 --rc geninfo_unexecuted_blocks=1 00:14:30.689 00:14:30.689 ' 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:30.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.689 --rc genhtml_branch_coverage=1 00:14:30.689 --rc genhtml_function_coverage=1 00:14:30.689 --rc genhtml_legend=1 00:14:30.689 --rc geninfo_all_blocks=1 00:14:30.689 --rc geninfo_unexecuted_blocks=1 00:14:30.689 00:14:30.689 ' 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:30.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.689 --rc genhtml_branch_coverage=1 00:14:30.689 --rc genhtml_function_coverage=1 00:14:30.689 --rc genhtml_legend=1 00:14:30.689 --rc geninfo_all_blocks=1 00:14:30.689 --rc geninfo_unexecuted_blocks=1 00:14:30.689 00:14:30.689 ' 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.689 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.689 
10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.951 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.951 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.951 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.951 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.951 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.951 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.951 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:30.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:30.952 10:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3204277 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3204277' 00:14:30.952 Process pid: 3204277 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3204277 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 3204277 ']' 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m '[0,1,2,3]' 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:30.952 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:30.952 [2024-11-06 10:56:22.193850] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:14:30.952 [2024-11-06 10:56:22.193938] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.952 [2024-11-06 10:56:22.271207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.952 [2024-11-06 10:56:22.313341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.952 [2024-11-06 10:56:22.313381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.952 [2024-11-06 10:56:22.313390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.952 [2024-11-06 10:56:22.313396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.952 [2024-11-06 10:56:22.313402] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:30.952 [2024-11-06 10:56:22.314950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.952 [2024-11-06 10:56:22.315066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.952 [2024-11-06 10:56:22.315221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.952 [2024-11-06 10:56:22.315222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.894 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:31.894 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:14:31.894 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:32.836 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:32.836 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:32.836 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:32.836 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:32.836 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:32.836 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:33.097 Malloc1 00:14:33.097 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:33.358 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:33.358 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:33.618 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:33.618 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:33.618 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:33.879 Malloc2 00:14:33.879 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:34.140 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:34.140 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:34.401 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:34.401 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:34.401 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:34.402 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:34.402 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:34.402 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:34.402 [2024-11-06 10:56:25.720183] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:14:34.402 [2024-11-06 10:56:25.720228] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3204969 ] 00:14:34.402 [2024-11-06 10:56:25.774844] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:34.402 [2024-11-06 10:56:25.783086] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:34.402 [2024-11-06 10:56:25.783109] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f715e3c0000 00:14:34.402 [2024-11-06 10:56:25.784083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:34.402 [2024-11-06 10:56:25.785085] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:34.402 [2024-11-06 10:56:25.786085] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:34.402 [2024-11-06 10:56:25.787096] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:34.402 [2024-11-06 10:56:25.788109] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:34.402 [2024-11-06 10:56:25.789102] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:34.402 [2024-11-06 10:56:25.790110] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:34.402 [2024-11-06 10:56:25.791116] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:34.402 [2024-11-06 10:56:25.792125] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:34.402 [2024-11-06 10:56:25.792135] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f715e3b5000 00:14:34.402 [2024-11-06 10:56:25.793461] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:34.402 [2024-11-06 10:56:25.814906] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:34.402 [2024-11-06 10:56:25.814931] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:34.402 [2024-11-06 10:56:25.817253] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:14:34.402 [2024-11-06 10:56:25.817298] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:34.402 [2024-11-06 10:56:25.817384] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:34.402 [2024-11-06 10:56:25.817401] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:34.402 [2024-11-06 10:56:25.817410] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:34.402 [2024-11-06 10:56:25.818263] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:34.402 [2024-11-06 10:56:25.818273] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:34.402 [2024-11-06 10:56:25.818281] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:34.402 [2024-11-06 10:56:25.819265] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:34.402 [2024-11-06 10:56:25.819275] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:34.402 [2024-11-06 10:56:25.819283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:34.402 [2024-11-06 10:56:25.820271] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:34.402 [2024-11-06 10:56:25.820280] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:34.402 [2024-11-06 10:56:25.821272] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:34.402 [2024-11-06 10:56:25.821281] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:34.402 [2024-11-06 10:56:25.821286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:34.402 [2024-11-06 10:56:25.821293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:34.402 [2024-11-06 10:56:25.821402] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:34.402 [2024-11-06 10:56:25.821407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:34.402 [2024-11-06 10:56:25.821413] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:34.664 [2024-11-06 10:56:25.822281] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:34.664 [2024-11-06 10:56:25.823276] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:34.664 [2024-11-06 10:56:25.824288] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:34.664 [2024-11-06 10:56:25.825281] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:34.664 [2024-11-06 10:56:25.825335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:34.664 [2024-11-06 10:56:25.826297] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:34.664 [2024-11-06 10:56:25.826305] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:34.664 [2024-11-06 10:56:25.826310] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:34.664 [2024-11-06 10:56:25.826334] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:34.664 [2024-11-06 10:56:25.826346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:34.664 [2024-11-06 10:56:25.826360] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:34.664 [2024-11-06 10:56:25.826366] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:34.664 [2024-11-06 10:56:25.826370] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.664 [2024-11-06 10:56:25.826383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:34.664 [2024-11-06 10:56:25.826422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:34.664 [2024-11-06 10:56:25.826432] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:34.664 [2024-11-06 10:56:25.826437] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:34.664 [2024-11-06 10:56:25.826442] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:34.664 [2024-11-06 10:56:25.826446] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:34.664 [2024-11-06 10:56:25.826453] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:34.664 [2024-11-06 10:56:25.826458] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:34.664 [2024-11-06 10:56:25.826463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:34.664 [2024-11-06 10:56:25.826473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:34.664 [2024-11-06 10:56:25.826483] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:34.664 [2024-11-06 10:56:25.826495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:34.664 [2024-11-06 10:56:25.826506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.664 [2024-11-06 
10:56:25.826515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.664 [2024-11-06 10:56:25.826523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.664 [2024-11-06 10:56:25.826532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.664 [2024-11-06 10:56:25.826537] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:34.664 [2024-11-06 10:56:25.826544] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:34.664 [2024-11-06 10:56:25.826553] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:34.664 [2024-11-06 10:56:25.826560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:34.664 [2024-11-06 10:56:25.826569] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:34.664 [2024-11-06 10:56:25.826575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:34.664 [2024-11-06 10:56:25.826582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:34.664 [2024-11-06 10:56:25.826588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:14:34.664 [2024-11-06 10:56:25.826597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:34.664 [2024-11-06 10:56:25.826609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:34.664 [2024-11-06 10:56:25.826671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:34.664 [2024-11-06 10:56:25.826679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:34.664 [2024-11-06 10:56:25.826687] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:34.664 [2024-11-06 10:56:25.826691] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:34.664 [2024-11-06 10:56:25.826695] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.664 [2024-11-06 10:56:25.826701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:34.664 [2024-11-06 10:56:25.826717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:34.664 [2024-11-06 10:56:25.826726] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:34.664 [2024-11-06 10:56:25.826737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:34.664 [2024-11-06 10:56:25.826757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:34.664 [2024-11-06 10:56:25.826765] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:34.664 [2024-11-06 10:56:25.826769] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:34.664 [2024-11-06 10:56:25.826773] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.664 [2024-11-06 10:56:25.826779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:34.664 [2024-11-06 10:56:25.826795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:34.664 [2024-11-06 10:56:25.826808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:34.664 [2024-11-06 10:56:25.826816] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:34.665 [2024-11-06 10:56:25.826823] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:34.665 [2024-11-06 10:56:25.826827] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:34.665 [2024-11-06 10:56:25.826831] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.665 [2024-11-06 10:56:25.826838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:34.665 [2024-11-06 10:56:25.826848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:34.665 [2024-11-06 10:56:25.826856] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:34.665 [2024-11-06 10:56:25.826863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:34.665 [2024-11-06 10:56:25.826871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:34.665 [2024-11-06 10:56:25.826877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:34.665 [2024-11-06 10:56:25.826882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:34.665 [2024-11-06 10:56:25.826888] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:34.665 [2024-11-06 10:56:25.826893] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:34.665 [2024-11-06 10:56:25.826898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:34.665 [2024-11-06 10:56:25.826903] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:34.665 [2024-11-06 10:56:25.826921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:34.665 [2024-11-06 10:56:25.826931] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:34.665 [2024-11-06 10:56:25.826943] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:34.665 [2024-11-06 10:56:25.826956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:34.665 [2024-11-06 10:56:25.826967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:34.665 [2024-11-06 10:56:25.826980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:34.665 [2024-11-06 10:56:25.826991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:34.665 [2024-11-06 10:56:25.826998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:34.665 [2024-11-06 10:56:25.827012] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:34.665 [2024-11-06 10:56:25.827017] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:34.665 [2024-11-06 10:56:25.827020] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:34.665 [2024-11-06 10:56:25.827024] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:34.665 [2024-11-06 10:56:25.827027] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:34.665 [2024-11-06 10:56:25.827034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:14:34.665 [2024-11-06 10:56:25.827041] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:34.665 [2024-11-06 10:56:25.827047] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:34.665 [2024-11-06 10:56:25.827051] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.665 [2024-11-06 10:56:25.827057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:34.665 [2024-11-06 10:56:25.827064] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:34.665 [2024-11-06 10:56:25.827069] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:34.665 [2024-11-06 10:56:25.827072] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.665 [2024-11-06 10:56:25.827078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:34.665 [2024-11-06 10:56:25.827086] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:34.665 [2024-11-06 10:56:25.827090] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:34.665 [2024-11-06 10:56:25.827094] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.665 [2024-11-06 10:56:25.827100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:34.665 [2024-11-06 10:56:25.827107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:14:34.665 [2024-11-06 10:56:25.827118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:34.665 [2024-11-06 10:56:25.827128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:34.665 [2024-11-06 10:56:25.827136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:34.665 ===================================================== 00:14:34.665 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:34.665 ===================================================== 00:14:34.665 Controller Capabilities/Features 00:14:34.665 ================================ 00:14:34.665 Vendor ID: 4e58 00:14:34.665 Subsystem Vendor ID: 4e58 00:14:34.665 Serial Number: SPDK1 00:14:34.665 Model Number: SPDK bdev Controller 00:14:34.665 Firmware Version: 25.01 00:14:34.665 Recommended Arb Burst: 6 00:14:34.665 IEEE OUI Identifier: 8d 6b 50 00:14:34.665 Multi-path I/O 00:14:34.665 May have multiple subsystem ports: Yes 00:14:34.665 May have multiple controllers: Yes 00:14:34.665 Associated with SR-IOV VF: No 00:14:34.665 Max Data Transfer Size: 131072 00:14:34.665 Max Number of Namespaces: 32 00:14:34.665 Max Number of I/O Queues: 127 00:14:34.665 NVMe Specification Version (VS): 1.3 00:14:34.665 NVMe Specification Version (Identify): 1.3 00:14:34.665 Maximum Queue Entries: 256 00:14:34.665 Contiguous Queues Required: Yes 00:14:34.665 Arbitration Mechanisms Supported 00:14:34.665 Weighted Round Robin: Not Supported 00:14:34.665 Vendor Specific: Not Supported 00:14:34.665 Reset Timeout: 15000 ms 00:14:34.665 Doorbell Stride: 4 bytes 00:14:34.665 NVM Subsystem Reset: Not Supported 00:14:34.665 Command Sets Supported 00:14:34.665 NVM Command Set: Supported 00:14:34.665 Boot Partition: Not Supported 00:14:34.665 Memory 
Page Size Minimum: 4096 bytes 00:14:34.665 Memory Page Size Maximum: 4096 bytes 00:14:34.665 Persistent Memory Region: Not Supported 00:14:34.665 Optional Asynchronous Events Supported 00:14:34.665 Namespace Attribute Notices: Supported 00:14:34.665 Firmware Activation Notices: Not Supported 00:14:34.665 ANA Change Notices: Not Supported 00:14:34.665 PLE Aggregate Log Change Notices: Not Supported 00:14:34.665 LBA Status Info Alert Notices: Not Supported 00:14:34.665 EGE Aggregate Log Change Notices: Not Supported 00:14:34.665 Normal NVM Subsystem Shutdown event: Not Supported 00:14:34.665 Zone Descriptor Change Notices: Not Supported 00:14:34.665 Discovery Log Change Notices: Not Supported 00:14:34.665 Controller Attributes 00:14:34.665 128-bit Host Identifier: Supported 00:14:34.665 Non-Operational Permissive Mode: Not Supported 00:14:34.665 NVM Sets: Not Supported 00:14:34.665 Read Recovery Levels: Not Supported 00:14:34.665 Endurance Groups: Not Supported 00:14:34.665 Predictable Latency Mode: Not Supported 00:14:34.665 Traffic Based Keep ALive: Not Supported 00:14:34.665 Namespace Granularity: Not Supported 00:14:34.665 SQ Associations: Not Supported 00:14:34.665 UUID List: Not Supported 00:14:34.665 Multi-Domain Subsystem: Not Supported 00:14:34.665 Fixed Capacity Management: Not Supported 00:14:34.665 Variable Capacity Management: Not Supported 00:14:34.665 Delete Endurance Group: Not Supported 00:14:34.665 Delete NVM Set: Not Supported 00:14:34.665 Extended LBA Formats Supported: Not Supported 00:14:34.665 Flexible Data Placement Supported: Not Supported 00:14:34.665 00:14:34.665 Controller Memory Buffer Support 00:14:34.665 ================================ 00:14:34.665 Supported: No 00:14:34.665 00:14:34.665 Persistent Memory Region Support 00:14:34.665 ================================ 00:14:34.665 Supported: No 00:14:34.665 00:14:34.665 Admin Command Set Attributes 00:14:34.665 ============================ 00:14:34.665 Security Send/Receive: Not Supported 
00:14:34.665 Format NVM: Not Supported 00:14:34.665 Firmware Activate/Download: Not Supported 00:14:34.665 Namespace Management: Not Supported 00:14:34.665 Device Self-Test: Not Supported 00:14:34.665 Directives: Not Supported 00:14:34.665 NVMe-MI: Not Supported 00:14:34.665 Virtualization Management: Not Supported 00:14:34.665 Doorbell Buffer Config: Not Supported 00:14:34.665 Get LBA Status Capability: Not Supported 00:14:34.665 Command & Feature Lockdown Capability: Not Supported 00:14:34.665 Abort Command Limit: 4 00:14:34.665 Async Event Request Limit: 4 00:14:34.665 Number of Firmware Slots: N/A 00:14:34.665 Firmware Slot 1 Read-Only: N/A 00:14:34.665 Firmware Activation Without Reset: N/A 00:14:34.665 Multiple Update Detection Support: N/A 00:14:34.665 Firmware Update Granularity: No Information Provided 00:14:34.665 Per-Namespace SMART Log: No 00:14:34.666 Asymmetric Namespace Access Log Page: Not Supported 00:14:34.666 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:34.666 Command Effects Log Page: Supported 00:14:34.666 Get Log Page Extended Data: Supported 00:14:34.666 Telemetry Log Pages: Not Supported 00:14:34.666 Persistent Event Log Pages: Not Supported 00:14:34.666 Supported Log Pages Log Page: May Support 00:14:34.666 Commands Supported & Effects Log Page: Not Supported 00:14:34.666 Feature Identifiers & Effects Log Page:May Support 00:14:34.666 NVMe-MI Commands & Effects Log Page: May Support 00:14:34.666 Data Area 4 for Telemetry Log: Not Supported 00:14:34.666 Error Log Page Entries Supported: 128 00:14:34.666 Keep Alive: Supported 00:14:34.666 Keep Alive Granularity: 10000 ms 00:14:34.666 00:14:34.666 NVM Command Set Attributes 00:14:34.666 ========================== 00:14:34.666 Submission Queue Entry Size 00:14:34.666 Max: 64 00:14:34.666 Min: 64 00:14:34.666 Completion Queue Entry Size 00:14:34.666 Max: 16 00:14:34.666 Min: 16 00:14:34.666 Number of Namespaces: 32 00:14:34.666 Compare Command: Supported 00:14:34.666 Write Uncorrectable 
Command: Not Supported 00:14:34.666 Dataset Management Command: Supported 00:14:34.666 Write Zeroes Command: Supported 00:14:34.666 Set Features Save Field: Not Supported 00:14:34.666 Reservations: Not Supported 00:14:34.666 Timestamp: Not Supported 00:14:34.666 Copy: Supported 00:14:34.666 Volatile Write Cache: Present 00:14:34.666 Atomic Write Unit (Normal): 1 00:14:34.666 Atomic Write Unit (PFail): 1 00:14:34.666 Atomic Compare & Write Unit: 1 00:14:34.666 Fused Compare & Write: Supported 00:14:34.666 Scatter-Gather List 00:14:34.666 SGL Command Set: Supported (Dword aligned) 00:14:34.666 SGL Keyed: Not Supported 00:14:34.666 SGL Bit Bucket Descriptor: Not Supported 00:14:34.666 SGL Metadata Pointer: Not Supported 00:14:34.666 Oversized SGL: Not Supported 00:14:34.666 SGL Metadata Address: Not Supported 00:14:34.666 SGL Offset: Not Supported 00:14:34.666 Transport SGL Data Block: Not Supported 00:14:34.666 Replay Protected Memory Block: Not Supported 00:14:34.666 00:14:34.666 Firmware Slot Information 00:14:34.666 ========================= 00:14:34.666 Active slot: 1 00:14:34.666 Slot 1 Firmware Revision: 25.01 00:14:34.666 00:14:34.666 00:14:34.666 Commands Supported and Effects 00:14:34.666 ============================== 00:14:34.666 Admin Commands 00:14:34.666 -------------- 00:14:34.666 Get Log Page (02h): Supported 00:14:34.666 Identify (06h): Supported 00:14:34.666 Abort (08h): Supported 00:14:34.666 Set Features (09h): Supported 00:14:34.666 Get Features (0Ah): Supported 00:14:34.666 Asynchronous Event Request (0Ch): Supported 00:14:34.666 Keep Alive (18h): Supported 00:14:34.666 I/O Commands 00:14:34.666 ------------ 00:14:34.666 Flush (00h): Supported LBA-Change 00:14:34.666 Write (01h): Supported LBA-Change 00:14:34.666 Read (02h): Supported 00:14:34.666 Compare (05h): Supported 00:14:34.666 Write Zeroes (08h): Supported LBA-Change 00:14:34.666 Dataset Management (09h): Supported LBA-Change 00:14:34.666 Copy (19h): Supported LBA-Change 00:14:34.666 
00:14:34.666 Error Log 00:14:34.666 ========= 00:14:34.666 00:14:34.666 Arbitration 00:14:34.666 =========== 00:14:34.666 Arbitration Burst: 1 00:14:34.666 00:14:34.666 Power Management 00:14:34.666 ================ 00:14:34.666 Number of Power States: 1 00:14:34.666 Current Power State: Power State #0 00:14:34.666 Power State #0: 00:14:34.666 Max Power: 0.00 W 00:14:34.666 Non-Operational State: Operational 00:14:34.666 Entry Latency: Not Reported 00:14:34.666 Exit Latency: Not Reported 00:14:34.666 Relative Read Throughput: 0 00:14:34.666 Relative Read Latency: 0 00:14:34.666 Relative Write Throughput: 0 00:14:34.666 Relative Write Latency: 0 00:14:34.666 Idle Power: Not Reported 00:14:34.666 Active Power: Not Reported 00:14:34.666 Non-Operational Permissive Mode: Not Supported 00:14:34.666 00:14:34.666 Health Information 00:14:34.666 ================== 00:14:34.666 Critical Warnings: 00:14:34.666 Available Spare Space: OK 00:14:34.666 Temperature: OK 00:14:34.666 Device Reliability: OK 00:14:34.666 Read Only: No 00:14:34.666 Volatile Memory Backup: OK 00:14:34.666 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:34.666 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:34.666 Available Spare: 0% 00:14:34.666 Available Sp[2024-11-06 10:56:25.827239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:34.666 [2024-11-06 10:56:25.827248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:34.666 [2024-11-06 10:56:25.827276] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:34.666 [2024-11-06 10:56:25.827287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.666 [2024-11-06 10:56:25.827293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.666 [2024-11-06 10:56:25.827300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.666 [2024-11-06 10:56:25.827306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.666 [2024-11-06 10:56:25.828304] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:34.666 [2024-11-06 10:56:25.828314] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:34.666 [2024-11-06 10:56:25.829306] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:34.666 [2024-11-06 10:56:25.829348] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:34.666 [2024-11-06 10:56:25.829354] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:34.666 [2024-11-06 10:56:25.830312] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:34.666 [2024-11-06 10:56:25.830326] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:34.666 [2024-11-06 10:56:25.830386] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:34.666 [2024-11-06 10:56:25.834753] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:34.666 are Threshold: 0% 00:14:34.666 Life Percentage Used: 0% 
00:14:34.666 Data Units Read: 0 00:14:34.666 Data Units Written: 0 00:14:34.666 Host Read Commands: 0 00:14:34.666 Host Write Commands: 0 00:14:34.666 Controller Busy Time: 0 minutes 00:14:34.666 Power Cycles: 0 00:14:34.666 Power On Hours: 0 hours 00:14:34.666 Unsafe Shutdowns: 0 00:14:34.666 Unrecoverable Media Errors: 0 00:14:34.666 Lifetime Error Log Entries: 0 00:14:34.666 Warning Temperature Time: 0 minutes 00:14:34.666 Critical Temperature Time: 0 minutes 00:14:34.666 00:14:34.666 Number of Queues 00:14:34.666 ================ 00:14:34.666 Number of I/O Submission Queues: 127 00:14:34.666 Number of I/O Completion Queues: 127 00:14:34.666 00:14:34.666 Active Namespaces 00:14:34.666 ================= 00:14:34.666 Namespace ID:1 00:14:34.666 Error Recovery Timeout: Unlimited 00:14:34.666 Command Set Identifier: NVM (00h) 00:14:34.666 Deallocate: Supported 00:14:34.666 Deallocated/Unwritten Error: Not Supported 00:14:34.666 Deallocated Read Value: Unknown 00:14:34.666 Deallocate in Write Zeroes: Not Supported 00:14:34.666 Deallocated Guard Field: 0xFFFF 00:14:34.666 Flush: Supported 00:14:34.666 Reservation: Supported 00:14:34.666 Namespace Sharing Capabilities: Multiple Controllers 00:14:34.666 Size (in LBAs): 131072 (0GiB) 00:14:34.666 Capacity (in LBAs): 131072 (0GiB) 00:14:34.666 Utilization (in LBAs): 131072 (0GiB) 00:14:34.666 NGUID: D16EEAA9D437419392AB5608A88304CA 00:14:34.666 UUID: d16eeaa9-d437-4193-92ab-5608a88304ca 00:14:34.666 Thin Provisioning: Not Supported 00:14:34.666 Per-NS Atomic Units: Yes 00:14:34.666 Atomic Boundary Size (Normal): 0 00:14:34.666 Atomic Boundary Size (PFail): 0 00:14:34.666 Atomic Boundary Offset: 0 00:14:34.666 Maximum Single Source Range Length: 65535 00:14:34.666 Maximum Copy Length: 65535 00:14:34.666 Maximum Source Range Count: 1 00:14:34.666 NGUID/EUI64 Never Reused: No 00:14:34.666 Namespace Write Protected: No 00:14:34.666 Number of LBA Formats: 1 00:14:34.666 Current LBA Format: LBA Format #00 00:14:34.666 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:14:34.666 00:14:34.666 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:34.667 [2024-11-06 10:56:26.039422] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:40.031 Initializing NVMe Controllers 00:14:40.031 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:40.031 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:40.031 Initialization complete. Launching workers. 00:14:40.031 ======================================================== 00:14:40.031 Latency(us) 00:14:40.031 Device Information : IOPS MiB/s Average min max 00:14:40.031 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39993.46 156.22 3200.74 852.65 7758.17 00:14:40.031 ======================================================== 00:14:40.031 Total : 39993.46 156.22 3200.74 852.65 7758.17 00:14:40.031 00:14:40.031 [2024-11-06 10:56:31.060678] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:40.031 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:40.031 [2024-11-06 10:56:31.252581] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:45.323 Initializing NVMe Controllers 00:14:45.323 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:45.323 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:45.323 Initialization complete. Launching workers. 00:14:45.323 ======================================================== 00:14:45.323 Latency(us) 00:14:45.323 Device Information : IOPS MiB/s Average min max 00:14:45.323 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7982.31 5985.05 11303.09 00:14:45.323 ======================================================== 00:14:45.323 Total : 16051.20 62.70 7982.31 5985.05 11303.09 00:14:45.323 00:14:45.324 [2024-11-06 10:56:36.290574] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:45.324 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:45.324 [2024-11-06 10:56:36.494527] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:50.613 [2024-11-06 10:56:41.563945] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:50.613 Initializing NVMe Controllers 00:14:50.613 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:50.613 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:50.613 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:50.613 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:50.613 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:50.613 Initialization complete. 
Launching workers. 00:14:50.613 Starting thread on core 2 00:14:50.613 Starting thread on core 3 00:14:50.613 Starting thread on core 1 00:14:50.613 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:50.613 [2024-11-06 10:56:41.851162] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:53.913 [2024-11-06 10:56:44.913314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:53.914 Initializing NVMe Controllers 00:14:53.914 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:53.914 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:53.914 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:53.914 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:53.914 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:53.914 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:53.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:53.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:53.914 Initialization complete. Launching workers. 
00:14:53.914 Starting thread on core 1 with urgent priority queue 00:14:53.914 Starting thread on core 2 with urgent priority queue 00:14:53.914 Starting thread on core 3 with urgent priority queue 00:14:53.914 Starting thread on core 0 with urgent priority queue 00:14:53.914 SPDK bdev Controller (SPDK1 ) core 0: 8995.33 IO/s 11.12 secs/100000 ios 00:14:53.914 SPDK bdev Controller (SPDK1 ) core 1: 13258.00 IO/s 7.54 secs/100000 ios 00:14:53.914 SPDK bdev Controller (SPDK1 ) core 2: 10235.00 IO/s 9.77 secs/100000 ios 00:14:53.914 SPDK bdev Controller (SPDK1 ) core 3: 11334.67 IO/s 8.82 secs/100000 ios 00:14:53.914 ======================================================== 00:14:53.914 00:14:53.914 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:53.914 [2024-11-06 10:56:45.202218] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:53.914 Initializing NVMe Controllers 00:14:53.914 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:53.914 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:53.914 Namespace ID: 1 size: 0GB 00:14:53.914 Initialization complete. 00:14:53.914 INFO: using host memory buffer for IO 00:14:53.914 Hello world! 
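Each of the example tools run above (reconnect, arbitration, hello_world) receives the same `-r 'trtype:VFIOUSER traddr:... subnqn:...'` transport-ID string. As a minimal illustration of that key:value format as it appears in the log, the sketch below splits such a string into its fields; the helper name is hypothetical, not part of SPDK:

```python
# Minimal sketch: parse the transport-ID string passed via -r to the SPDK
# example tools in the log above. The 'key:value' tokens separated by
# whitespace are taken from the log; parse_trid itself is a hypothetical
# helper, not an SPDK API.
def parse_trid(trid: str) -> dict:
    """Split 'trtype:VFIOUSER traddr:/path subnqn:nqn....' into a dict.

    A value (a path or an NQN such as nqn.2019-07.io.spdk:cnode1) may
    itself contain ':', so split each token on the first ':' only.
    """
    fields = {}
    for token in trid.split():
        key, _, value = token.partition(":")
        fields[key] = value
    return fields

trid = ("trtype:VFIOUSER "
        "traddr:/var/run/vfio-user/domain/vfio-user1/1 "
        "subnqn:nqn.2019-07.io.spdk:cnode1")
print(parse_trid(trid)["trtype"])  # VFIOUSER
```

Note that splitting only on the first ':' per token is what keeps the NQN (`nqn.2019-07.io.spdk:cnode1`) intact.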
00:14:53.914 [2024-11-06 10:56:45.235418] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:53.914 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:54.175 [2024-11-06 10:56:45.521199] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:55.560 Initializing NVMe Controllers 00:14:55.561 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:55.561 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:55.561 Initialization complete. Launching workers. 00:14:55.561 submit (in ns) avg, min, max = 8276.1, 3905.8, 4000085.8 00:14:55.561 complete (in ns) avg, min, max = 18295.3, 2396.7, 3999729.2 00:14:55.561 00:14:55.561 Submit histogram 00:14:55.561 ================ 00:14:55.561 Range in us Cumulative Count 00:14:55.561 3.893 - 3.920: 0.6856% ( 130) 00:14:55.561 3.920 - 3.947: 4.4723% ( 718) 00:14:55.561 3.947 - 3.973: 13.7018% ( 1750) 00:14:55.561 3.973 - 4.000: 25.8742% ( 2308) 00:14:55.561 4.000 - 4.027: 36.6489% ( 2043) 00:14:55.561 4.027 - 4.053: 48.0249% ( 2157) 00:14:55.561 4.053 - 4.080: 63.3722% ( 2910) 00:14:55.561 4.080 - 4.107: 78.2448% ( 2820) 00:14:55.561 4.107 - 4.133: 89.6946% ( 2171) 00:14:55.561 4.133 - 4.160: 95.9760% ( 1191) 00:14:55.561 4.160 - 4.187: 98.2912% ( 439) 00:14:55.561 4.187 - 4.213: 99.1667% ( 166) 00:14:55.561 4.213 - 4.240: 99.4146% ( 47) 00:14:55.561 4.240 - 4.267: 99.4673% ( 10) 00:14:55.561 4.320 - 4.347: 99.4726% ( 1) 00:14:55.561 4.347 - 4.373: 99.4779% ( 1) 00:14:55.561 4.427 - 4.453: 99.4831% ( 1) 00:14:55.561 4.613 - 4.640: 99.4884% ( 1) 00:14:55.561 4.800 - 4.827: 99.4937% ( 1) 00:14:55.561 4.853 - 4.880: 99.4990% ( 1) 00:14:55.561 4.960 - 4.987: 99.5042% ( 1) 
00:14:55.561 5.093 - 5.120: 99.5095% ( 1) 00:14:55.561 5.520 - 5.547: 99.5148% ( 1) 00:14:55.561 5.573 - 5.600: 99.5201% ( 1) 00:14:55.561 5.627 - 5.653: 99.5253% ( 1) 00:14:55.561 5.653 - 5.680: 99.5306% ( 1) 00:14:55.561 5.840 - 5.867: 99.5412% ( 2) 00:14:55.561 5.893 - 5.920: 99.5464% ( 1) 00:14:55.561 5.920 - 5.947: 99.5517% ( 1) 00:14:55.561 5.973 - 6.000: 99.5623% ( 2) 00:14:55.561 6.000 - 6.027: 99.5728% ( 2) 00:14:55.561 6.053 - 6.080: 99.6097% ( 7) 00:14:55.561 6.080 - 6.107: 99.6203% ( 2) 00:14:55.561 6.107 - 6.133: 99.6255% ( 1) 00:14:55.561 6.133 - 6.160: 99.6361% ( 2) 00:14:55.561 6.160 - 6.187: 99.6414% ( 1) 00:14:55.561 6.187 - 6.213: 99.6466% ( 1) 00:14:55.561 6.267 - 6.293: 99.6572% ( 2) 00:14:55.561 6.293 - 6.320: 99.6625% ( 1) 00:14:55.561 6.320 - 6.347: 99.6677% ( 1) 00:14:55.561 6.347 - 6.373: 99.6730% ( 1) 00:14:55.561 6.427 - 6.453: 99.6783% ( 1) 00:14:55.561 6.480 - 6.507: 99.6888% ( 2) 00:14:55.561 6.507 - 6.533: 99.6941% ( 1) 00:14:55.561 6.533 - 6.560: 99.6994% ( 1) 00:14:55.561 6.560 - 6.587: 99.7047% ( 1) 00:14:55.561 6.587 - 6.613: 99.7099% ( 1) 00:14:55.561 6.613 - 6.640: 99.7152% ( 1) 00:14:55.561 6.667 - 6.693: 99.7258% ( 2) 00:14:55.561 6.693 - 6.720: 99.7363% ( 2) 00:14:55.561 6.747 - 6.773: 99.7468% ( 2) 00:14:55.561 6.773 - 6.800: 99.7574% ( 2) 00:14:55.561 6.800 - 6.827: 99.7627% ( 1) 00:14:55.561 6.880 - 6.933: 99.7679% ( 1) 00:14:55.561 6.933 - 6.987: 99.7890% ( 4) 00:14:55.561 6.987 - 7.040: 99.7943% ( 1) 00:14:55.561 7.040 - 7.093: 99.8101% ( 3) 00:14:55.561 7.093 - 7.147: 99.8207% ( 2) 00:14:55.561 7.147 - 7.200: 99.8312% ( 2) 00:14:55.561 7.253 - 7.307: 99.8418% ( 2) 00:14:55.561 7.360 - 7.413: 99.8471% ( 1) 00:14:55.561 7.413 - 7.467: 99.8523% ( 1) 00:14:55.561 7.520 - 7.573: 99.8576% ( 1) 00:14:55.561 7.573 - 7.627: 99.8629% ( 1) 00:14:55.561 7.680 - 7.733: 99.8682% ( 1) 00:14:55.561 7.733 - 7.787: 99.8734% ( 1) 00:14:55.561 7.840 - 7.893: 99.8787% ( 1) 00:14:55.561 7.893 - 7.947: 99.8840% ( 1) 00:14:55.561 8.267 - 
8.320: 99.8892% ( 1) 00:14:55.561 35.627 - 35.840: 99.8945% ( 1) 00:14:55.561 3986.773 - 4014.080: 100.0000% ( 20) 00:14:55.561 00:14:55.561 Complete histogram 00:14:55.561 ================== 00:14:55.561 Range in us Cumulative Count 00:14:55.561 2.387 - 2.400: 0.0527% ( 10) 00:14:55.561 2.400 - 2.413: 0.3797% ( 62) 00:14:55.561 2.413 - 2.427: 0.4799% ( 19) 00:14:55.561 2.427 - 2.440: 0.6171% ( 26) 00:14:55.561 2.440 - 2.453: 0.6962% ( 15) 00:14:55.561 2.453 - 2.467: 40.8945% ( 7622) 00:14:55.561 2.467 - [2024-11-06 10:56:46.544536] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:55.561 2.480: 55.5139% ( 2772) 00:14:55.561 2.480 - 2.493: 69.3160% ( 2617) 00:14:55.561 2.493 - 2.507: 76.7048% ( 1401) 00:14:55.561 2.507 - 2.520: 80.7869% ( 774) 00:14:55.561 2.520 - 2.533: 85.0324% ( 805) 00:14:55.561 2.533 - 2.547: 91.0448% ( 1140) 00:14:55.561 2.547 - 2.560: 94.8631% ( 724) 00:14:55.561 2.560 - 2.573: 97.0202% ( 409) 00:14:55.561 2.573 - 2.587: 98.4811% ( 277) 00:14:55.561 2.587 - 2.600: 99.0876% ( 115) 00:14:55.561 2.600 - 2.613: 99.2986% ( 40) 00:14:55.561 2.613 - 2.627: 99.3408% ( 8) 00:14:55.561 2.627 - 2.640: 99.3618% ( 4) 00:14:55.561 2.653 - 2.667: 99.3671% ( 1) 00:14:55.561 2.760 - 2.773: 99.3724% ( 1) 00:14:55.561 4.293 - 4.320: 99.3777% ( 1) 00:14:55.561 4.320 - 4.347: 99.3829% ( 1) 00:14:55.561 4.373 - 4.400: 99.3882% ( 1) 00:14:55.561 4.400 - 4.427: 99.3935% ( 1) 00:14:55.561 4.427 - 4.453: 99.3988% ( 1) 00:14:55.561 4.507 - 4.533: 99.4040% ( 1) 00:14:55.561 4.693 - 4.720: 99.4146% ( 2) 00:14:55.561 4.880 - 4.907: 99.4199% ( 1) 00:14:55.561 4.907 - 4.933: 99.4304% ( 2) 00:14:55.561 4.960 - 4.987: 99.4357% ( 1) 00:14:55.561 4.987 - 5.013: 99.4410% ( 1) 00:14:55.561 5.040 - 5.067: 99.4462% ( 1) 00:14:55.561 5.067 - 5.093: 99.4515% ( 1) 00:14:55.561 5.093 - 5.120: 99.4568% ( 1) 00:14:55.561 5.253 - 5.280: 99.4621% ( 1) 00:14:55.561 5.333 - 5.360: 99.4673% ( 1) 00:14:55.561 5.440 - 5.467: 99.4779% ( 
2) 00:14:55.561 5.600 - 5.627: 99.4831% ( 1) 00:14:55.561 5.627 - 5.653: 99.4884% ( 1) 00:14:55.561 5.653 - 5.680: 99.4937% ( 1) 00:14:55.561 5.707 - 5.733: 99.4990% ( 1) 00:14:55.561 5.787 - 5.813: 99.5042% ( 1) 00:14:55.561 5.840 - 5.867: 99.5095% ( 1) 00:14:55.561 5.867 - 5.893: 99.5148% ( 1) 00:14:55.561 5.893 - 5.920: 99.5201% ( 1) 00:14:55.561 5.947 - 5.973: 99.5253% ( 1) 00:14:55.561 6.000 - 6.027: 99.5306% ( 1) 00:14:55.561 6.027 - 6.053: 99.5359% ( 1) 00:14:55.561 6.160 - 6.187: 99.5464% ( 2) 00:14:55.561 6.187 - 6.213: 99.5570% ( 2) 00:14:55.561 6.240 - 6.267: 99.5623% ( 1) 00:14:55.561 6.427 - 6.453: 99.5675% ( 1) 00:14:55.561 6.613 - 6.640: 99.5728% ( 1) 00:14:55.561 6.667 - 6.693: 99.5781% ( 1) 00:14:55.561 8.373 - 8.427: 99.5834% ( 1) 00:14:55.561 9.760 - 9.813: 99.5886% ( 1) 00:14:55.561 10.027 - 10.080: 99.5939% ( 1) 00:14:55.561 11.733 - 11.787: 99.5992% ( 1) 00:14:55.561 13.013 - 13.067: 99.6045% ( 1) 00:14:55.561 3986.773 - 4014.080: 100.0000% ( 75) 00:14:55.561 00:14:55.561 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:55.561 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:55.561 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:55.561 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:55.561 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:55.561 [ 00:14:55.561 { 00:14:55.561 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:55.561 "subtype": "Discovery", 00:14:55.561 "listen_addresses": [], 00:14:55.561 "allow_any_host": true, 00:14:55.561 
"hosts": [] 00:14:55.561 }, 00:14:55.561 { 00:14:55.561 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:55.561 "subtype": "NVMe", 00:14:55.561 "listen_addresses": [ 00:14:55.561 { 00:14:55.562 "trtype": "VFIOUSER", 00:14:55.562 "adrfam": "IPv4", 00:14:55.562 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:55.562 "trsvcid": "0" 00:14:55.562 } 00:14:55.562 ], 00:14:55.562 "allow_any_host": true, 00:14:55.562 "hosts": [], 00:14:55.562 "serial_number": "SPDK1", 00:14:55.562 "model_number": "SPDK bdev Controller", 00:14:55.562 "max_namespaces": 32, 00:14:55.562 "min_cntlid": 1, 00:14:55.562 "max_cntlid": 65519, 00:14:55.562 "namespaces": [ 00:14:55.562 { 00:14:55.562 "nsid": 1, 00:14:55.562 "bdev_name": "Malloc1", 00:14:55.562 "name": "Malloc1", 00:14:55.562 "nguid": "D16EEAA9D437419392AB5608A88304CA", 00:14:55.562 "uuid": "d16eeaa9-d437-4193-92ab-5608a88304ca" 00:14:55.562 } 00:14:55.562 ] 00:14:55.562 }, 00:14:55.562 { 00:14:55.562 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:55.562 "subtype": "NVMe", 00:14:55.562 "listen_addresses": [ 00:14:55.562 { 00:14:55.562 "trtype": "VFIOUSER", 00:14:55.562 "adrfam": "IPv4", 00:14:55.562 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:55.562 "trsvcid": "0" 00:14:55.562 } 00:14:55.562 ], 00:14:55.562 "allow_any_host": true, 00:14:55.562 "hosts": [], 00:14:55.562 "serial_number": "SPDK2", 00:14:55.562 "model_number": "SPDK bdev Controller", 00:14:55.562 "max_namespaces": 32, 00:14:55.562 "min_cntlid": 1, 00:14:55.562 "max_cntlid": 65519, 00:14:55.562 "namespaces": [ 00:14:55.562 { 00:14:55.562 "nsid": 1, 00:14:55.562 "bdev_name": "Malloc2", 00:14:55.562 "name": "Malloc2", 00:14:55.562 "nguid": "A453DC636EF54E8B8FF44AC916514B5C", 00:14:55.562 "uuid": "a453dc63-6ef5-4e8b-8ff4-4ac916514b5c" 00:14:55.562 } 00:14:55.562 ] 00:14:55.562 } 00:14:55.562 ] 00:14:55.562 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:55.562 10:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3209021 00:14:55.562 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:55.562 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:14:55.562 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:55.562 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:55.562 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:14:55.562 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:55.562 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:55.562 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:55.562 Malloc3 00:14:55.562 [2024-11-06 10:56:46.980194] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:55.823 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:55.823 [2024-11-06 10:56:47.141396] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:55.824 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:55.824 Asynchronous Event Request test 00:14:55.824 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:55.824 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:55.824 Registering asynchronous event callbacks... 00:14:55.824 Starting namespace attribute notice tests for all controllers... 00:14:55.824 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:55.824 aer_cb - Changed Namespace 00:14:55.824 Cleaning up... 00:14:56.086 [ 00:14:56.086 { 00:14:56.086 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:56.086 "subtype": "Discovery", 00:14:56.086 "listen_addresses": [], 00:14:56.086 "allow_any_host": true, 00:14:56.086 "hosts": [] 00:14:56.086 }, 00:14:56.086 { 00:14:56.086 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:56.086 "subtype": "NVMe", 00:14:56.086 "listen_addresses": [ 00:14:56.086 { 00:14:56.086 "trtype": "VFIOUSER", 00:14:56.086 "adrfam": "IPv4", 00:14:56.086 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:56.086 "trsvcid": "0" 00:14:56.086 } 00:14:56.086 ], 00:14:56.086 "allow_any_host": true, 00:14:56.086 "hosts": [], 00:14:56.086 "serial_number": "SPDK1", 00:14:56.086 "model_number": "SPDK bdev Controller", 00:14:56.086 "max_namespaces": 32, 00:14:56.086 "min_cntlid": 1, 00:14:56.086 "max_cntlid": 65519, 00:14:56.086 "namespaces": [ 00:14:56.086 { 00:14:56.086 "nsid": 1, 00:14:56.086 "bdev_name": "Malloc1", 00:14:56.086 "name": "Malloc1", 00:14:56.086 "nguid": "D16EEAA9D437419392AB5608A88304CA", 00:14:56.086 "uuid": "d16eeaa9-d437-4193-92ab-5608a88304ca" 00:14:56.086 }, 00:14:56.086 { 00:14:56.086 "nsid": 2, 00:14:56.086 "bdev_name": "Malloc3", 00:14:56.086 "name": "Malloc3", 00:14:56.086 "nguid": "17D8EA1227CB49CBA86F7033F516D336", 00:14:56.086 "uuid": "17d8ea12-27cb-49cb-a86f-7033f516d336" 00:14:56.086 } 00:14:56.086 ] 00:14:56.086 }, 00:14:56.086 { 00:14:56.086 "nqn": 
"nqn.2019-07.io.spdk:cnode2", 00:14:56.086 "subtype": "NVMe", 00:14:56.086 "listen_addresses": [ 00:14:56.086 { 00:14:56.086 "trtype": "VFIOUSER", 00:14:56.086 "adrfam": "IPv4", 00:14:56.086 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:56.086 "trsvcid": "0" 00:14:56.086 } 00:14:56.086 ], 00:14:56.086 "allow_any_host": true, 00:14:56.086 "hosts": [], 00:14:56.086 "serial_number": "SPDK2", 00:14:56.086 "model_number": "SPDK bdev Controller", 00:14:56.086 "max_namespaces": 32, 00:14:56.086 "min_cntlid": 1, 00:14:56.086 "max_cntlid": 65519, 00:14:56.086 "namespaces": [ 00:14:56.086 { 00:14:56.086 "nsid": 1, 00:14:56.086 "bdev_name": "Malloc2", 00:14:56.086 "name": "Malloc2", 00:14:56.086 "nguid": "A453DC636EF54E8B8FF44AC916514B5C", 00:14:56.086 "uuid": "a453dc63-6ef5-4e8b-8ff4-4ac916514b5c" 00:14:56.086 } 00:14:56.086 ] 00:14:56.086 } 00:14:56.086 ] 00:14:56.086 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3209021 00:14:56.086 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:56.086 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:56.086 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:56.086 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:56.086 [2024-11-06 10:56:47.383542] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
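The two `nvmf_get_subsystems` dumps above show the AER test's effect: cnode1 gains Malloc3 as nsid 2 alongside Malloc1. A small sketch of summarizing that RPC output follows; the sample document is abbreviated from the JSON printed in the log (same field names), and `namespaces_by_nqn` is a hypothetical helper:

```python
import json

# Hedged sketch: reduce nvmf_get_subsystems output (as printed in the log
# above) to a map of NVMe subsystem NQN -> namespace bdev names. The sample
# is abbreviated from the log; only fields shown there are used.
sample = json.loads("""
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
  {"nqn": "nqn.2019-07.io.spdk:cnode1", "subtype": "NVMe",
   "namespaces": [{"nsid": 1, "bdev_name": "Malloc1"},
                  {"nsid": 2, "bdev_name": "Malloc3"}]},
  {"nqn": "nqn.2019-07.io.spdk:cnode2", "subtype": "NVMe",
   "namespaces": [{"nsid": 1, "bdev_name": "Malloc2"}]}
]
""")

def namespaces_by_nqn(subsystems):
    # The discovery subsystem carries no namespaces, so keep NVMe entries only.
    return {s["nqn"]: [ns["bdev_name"] for ns in s.get("namespaces", [])]
            for s in subsystems if s["subtype"] == "NVMe"}

print(namespaces_by_nqn(sample))
```

Comparing this summary before and after the `nvmf_subsystem_add_ns` call is one way to confirm the namespace-attribute-changed AEN seen in the test.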
00:14:56.086 [2024-11-06 10:56:47.383612] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209188 ] 00:14:56.086 [2024-11-06 10:56:47.436103] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:56.086 [2024-11-06 10:56:47.449002] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:56.086 [2024-11-06 10:56:47.449026] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f639cf88000 00:14:56.086 [2024-11-06 10:56:47.450006] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.086 [2024-11-06 10:56:47.451008] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.086 [2024-11-06 10:56:47.452013] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.086 [2024-11-06 10:56:47.453015] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:56.086 [2024-11-06 10:56:47.454022] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:56.086 [2024-11-06 10:56:47.455031] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.086 [2024-11-06 10:56:47.456030] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:56.086 
[2024-11-06 10:56:47.457036] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.086 [2024-11-06 10:56:47.458049] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:56.086 [2024-11-06 10:56:47.458060] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f639cf7d000 00:14:56.086 [2024-11-06 10:56:47.459386] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:56.086 [2024-11-06 10:56:47.475591] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:56.086 [2024-11-06 10:56:47.475617] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:56.086 [2024-11-06 10:56:47.477672] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:56.086 [2024-11-06 10:56:47.477721] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:56.086 [2024-11-06 10:56:47.477808] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:56.086 [2024-11-06 10:56:47.477821] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:56.086 [2024-11-06 10:56:47.477827] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:56.086 [2024-11-06 10:56:47.479752] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:56.086 [2024-11-06 10:56:47.479763] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:56.086 [2024-11-06 10:56:47.479771] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:56.086 [2024-11-06 10:56:47.480681] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:56.086 [2024-11-06 10:56:47.480691] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:56.086 [2024-11-06 10:56:47.480698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:56.086 [2024-11-06 10:56:47.481683] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:56.086 [2024-11-06 10:56:47.481693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:56.086 [2024-11-06 10:56:47.482696] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:56.086 [2024-11-06 10:56:47.482705] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:56.086 [2024-11-06 10:56:47.482711] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:56.086 [2024-11-06 10:56:47.482717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:56.086 [2024-11-06 10:56:47.482825] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:56.086 [2024-11-06 10:56:47.482831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:56.087 [2024-11-06 10:56:47.482836] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:56.087 [2024-11-06 10:56:47.483704] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:56.087 [2024-11-06 10:56:47.484705] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:56.087 [2024-11-06 10:56:47.485707] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:56.087 [2024-11-06 10:56:47.486714] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:56.087 [2024-11-06 10:56:47.486760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:56.087 [2024-11-06 10:56:47.487720] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:56.087 [2024-11-06 10:56:47.487729] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:56.087 [2024-11-06 10:56:47.487734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:56.087 [2024-11-06 10:56:47.487759] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:56.087 [2024-11-06 10:56:47.487767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:56.087 [2024-11-06 10:56:47.487780] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:56.087 [2024-11-06 10:56:47.487785] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:56.087 [2024-11-06 10:56:47.487788] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.087 [2024-11-06 10:56:47.487800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:56.087 [2024-11-06 10:56:47.493755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:56.087 [2024-11-06 10:56:47.493767] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:56.087 [2024-11-06 10:56:47.493773] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:56.087 [2024-11-06 10:56:47.493777] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:56.087 [2024-11-06 10:56:47.493785] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:56.087 [2024-11-06 10:56:47.493792] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:56.087 [2024-11-06 10:56:47.493796] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:56.087 [2024-11-06 10:56:47.493802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:56.087 [2024-11-06 10:56:47.493812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:56.087 [2024-11-06 10:56:47.493823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:56.087 [2024-11-06 10:56:47.501753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:56.087 [2024-11-06 10:56:47.501765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.087 [2024-11-06 10:56:47.501774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.087 [2024-11-06 10:56:47.501784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.087 [2024-11-06 10:56:47.501792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.087 [2024-11-06 10:56:47.501797] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:56.087 [2024-11-06 10:56:47.501804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:56.087 [2024-11-06 10:56:47.501813] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:56.349 [2024-11-06 10:56:47.509752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:56.349 [2024-11-06 10:56:47.509764] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:56.349 [2024-11-06 10:56:47.509769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:56.349 [2024-11-06 10:56:47.509776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:56.349 [2024-11-06 10:56:47.509782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:56.349 [2024-11-06 10:56:47.509791] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:56.349 [2024-11-06 10:56:47.517754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:56.349 [2024-11-06 10:56:47.517821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:56.349 [2024-11-06 10:56:47.517830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:56.349 
[2024-11-06 10:56:47.517837] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:56.349 [2024-11-06 10:56:47.517847] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:56.349 [2024-11-06 10:56:47.517851] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.349 [2024-11-06 10:56:47.517857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:56.349 [2024-11-06 10:56:47.525752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:56.349 [2024-11-06 10:56:47.525764] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:56.349 [2024-11-06 10:56:47.525777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:56.349 [2024-11-06 10:56:47.525785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:56.349 [2024-11-06 10:56:47.525792] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:56.349 [2024-11-06 10:56:47.525797] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:56.349 [2024-11-06 10:56:47.525800] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.349 [2024-11-06 10:56:47.525807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:56.349 [2024-11-06 10:56:47.533752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:56.349 [2024-11-06 10:56:47.533768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:56.349 [2024-11-06 10:56:47.533776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:56.349 [2024-11-06 10:56:47.533784] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:56.349 [2024-11-06 10:56:47.533788] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:56.349 [2024-11-06 10:56:47.533792] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.349 [2024-11-06 10:56:47.533798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:56.349 [2024-11-06 10:56:47.541753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:56.349 [2024-11-06 10:56:47.541763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:56.349 [2024-11-06 10:56:47.541770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:56.349 [2024-11-06 10:56:47.541779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:56.349 [2024-11-06 10:56:47.541785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:14:56.349 [2024-11-06 10:56:47.541790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:56.349 [2024-11-06 10:56:47.541795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:56.349 [2024-11-06 10:56:47.541801] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:56.349 [2024-11-06 10:56:47.541807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:56.349 [2024-11-06 10:56:47.541813] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:56.349 [2024-11-06 10:56:47.541830] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:56.349 [2024-11-06 10:56:47.549752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:56.349 [2024-11-06 10:56:47.549767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:56.349 [2024-11-06 10:56:47.557752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:56.349 [2024-11-06 10:56:47.557766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:56.350 [2024-11-06 10:56:47.565751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:56.350 [2024-11-06 
10:56:47.565765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:56.350 [2024-11-06 10:56:47.573754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:56.350 [2024-11-06 10:56:47.573770] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:56.350 [2024-11-06 10:56:47.573775] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:56.350 [2024-11-06 10:56:47.573779] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:56.350 [2024-11-06 10:56:47.573783] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:56.350 [2024-11-06 10:56:47.573786] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:56.350 [2024-11-06 10:56:47.573793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:56.350 [2024-11-06 10:56:47.573800] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:56.350 [2024-11-06 10:56:47.573805] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:56.350 [2024-11-06 10:56:47.573808] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.350 [2024-11-06 10:56:47.573814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:56.350 [2024-11-06 10:56:47.573822] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:56.350 [2024-11-06 10:56:47.573826] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:56.350 [2024-11-06 10:56:47.573829] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.350 [2024-11-06 10:56:47.573835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:56.350 [2024-11-06 10:56:47.573843] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:56.350 [2024-11-06 10:56:47.573848] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:56.350 [2024-11-06 10:56:47.573851] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:56.350 [2024-11-06 10:56:47.573857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:56.350 [2024-11-06 10:56:47.581752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:56.350 [2024-11-06 10:56:47.581767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:56.350 [2024-11-06 10:56:47.581778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:56.350 [2024-11-06 10:56:47.581785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:56.350 ===================================================== 00:14:56.350 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:56.350 ===================================================== 00:14:56.350 Controller Capabilities/Features 00:14:56.350 
================================ 00:14:56.350 Vendor ID: 4e58 00:14:56.350 Subsystem Vendor ID: 4e58 00:14:56.350 Serial Number: SPDK2 00:14:56.350 Model Number: SPDK bdev Controller 00:14:56.350 Firmware Version: 25.01 00:14:56.350 Recommended Arb Burst: 6 00:14:56.350 IEEE OUI Identifier: 8d 6b 50 00:14:56.350 Multi-path I/O 00:14:56.350 May have multiple subsystem ports: Yes 00:14:56.350 May have multiple controllers: Yes 00:14:56.350 Associated with SR-IOV VF: No 00:14:56.350 Max Data Transfer Size: 131072 00:14:56.350 Max Number of Namespaces: 32 00:14:56.350 Max Number of I/O Queues: 127 00:14:56.350 NVMe Specification Version (VS): 1.3 00:14:56.350 NVMe Specification Version (Identify): 1.3 00:14:56.350 Maximum Queue Entries: 256 00:14:56.350 Contiguous Queues Required: Yes 00:14:56.350 Arbitration Mechanisms Supported 00:14:56.350 Weighted Round Robin: Not Supported 00:14:56.350 Vendor Specific: Not Supported 00:14:56.350 Reset Timeout: 15000 ms 00:14:56.350 Doorbell Stride: 4 bytes 00:14:56.350 NVM Subsystem Reset: Not Supported 00:14:56.350 Command Sets Supported 00:14:56.350 NVM Command Set: Supported 00:14:56.350 Boot Partition: Not Supported 00:14:56.350 Memory Page Size Minimum: 4096 bytes 00:14:56.350 Memory Page Size Maximum: 4096 bytes 00:14:56.350 Persistent Memory Region: Not Supported 00:14:56.350 Optional Asynchronous Events Supported 00:14:56.350 Namespace Attribute Notices: Supported 00:14:56.350 Firmware Activation Notices: Not Supported 00:14:56.350 ANA Change Notices: Not Supported 00:14:56.350 PLE Aggregate Log Change Notices: Not Supported 00:14:56.350 LBA Status Info Alert Notices: Not Supported 00:14:56.350 EGE Aggregate Log Change Notices: Not Supported 00:14:56.350 Normal NVM Subsystem Shutdown event: Not Supported 00:14:56.350 Zone Descriptor Change Notices: Not Supported 00:14:56.350 Discovery Log Change Notices: Not Supported 00:14:56.350 Controller Attributes 00:14:56.350 128-bit Host Identifier: Supported 00:14:56.350 
Non-Operational Permissive Mode: Not Supported 00:14:56.350 NVM Sets: Not Supported 00:14:56.350 Read Recovery Levels: Not Supported 00:14:56.350 Endurance Groups: Not Supported 00:14:56.350 Predictable Latency Mode: Not Supported 00:14:56.350 Traffic Based Keep ALive: Not Supported 00:14:56.350 Namespace Granularity: Not Supported 00:14:56.350 SQ Associations: Not Supported 00:14:56.350 UUID List: Not Supported 00:14:56.350 Multi-Domain Subsystem: Not Supported 00:14:56.350 Fixed Capacity Management: Not Supported 00:14:56.350 Variable Capacity Management: Not Supported 00:14:56.350 Delete Endurance Group: Not Supported 00:14:56.350 Delete NVM Set: Not Supported 00:14:56.350 Extended LBA Formats Supported: Not Supported 00:14:56.350 Flexible Data Placement Supported: Not Supported 00:14:56.350 00:14:56.350 Controller Memory Buffer Support 00:14:56.350 ================================ 00:14:56.350 Supported: No 00:14:56.350 00:14:56.350 Persistent Memory Region Support 00:14:56.350 ================================ 00:14:56.350 Supported: No 00:14:56.350 00:14:56.350 Admin Command Set Attributes 00:14:56.350 ============================ 00:14:56.350 Security Send/Receive: Not Supported 00:14:56.350 Format NVM: Not Supported 00:14:56.350 Firmware Activate/Download: Not Supported 00:14:56.350 Namespace Management: Not Supported 00:14:56.350 Device Self-Test: Not Supported 00:14:56.350 Directives: Not Supported 00:14:56.350 NVMe-MI: Not Supported 00:14:56.350 Virtualization Management: Not Supported 00:14:56.350 Doorbell Buffer Config: Not Supported 00:14:56.350 Get LBA Status Capability: Not Supported 00:14:56.350 Command & Feature Lockdown Capability: Not Supported 00:14:56.350 Abort Command Limit: 4 00:14:56.350 Async Event Request Limit: 4 00:14:56.350 Number of Firmware Slots: N/A 00:14:56.350 Firmware Slot 1 Read-Only: N/A 00:14:56.350 Firmware Activation Without Reset: N/A 00:14:56.350 Multiple Update Detection Support: N/A 00:14:56.350 Firmware Update 
Granularity: No Information Provided 00:14:56.350 Per-Namespace SMART Log: No 00:14:56.350 Asymmetric Namespace Access Log Page: Not Supported 00:14:56.350 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:56.350 Command Effects Log Page: Supported 00:14:56.350 Get Log Page Extended Data: Supported 00:14:56.350 Telemetry Log Pages: Not Supported 00:14:56.350 Persistent Event Log Pages: Not Supported 00:14:56.350 Supported Log Pages Log Page: May Support 00:14:56.350 Commands Supported & Effects Log Page: Not Supported 00:14:56.350 Feature Identifiers & Effects Log Page:May Support 00:14:56.350 NVMe-MI Commands & Effects Log Page: May Support 00:14:56.350 Data Area 4 for Telemetry Log: Not Supported 00:14:56.350 Error Log Page Entries Supported: 128 00:14:56.350 Keep Alive: Supported 00:14:56.350 Keep Alive Granularity: 10000 ms 00:14:56.350 00:14:56.350 NVM Command Set Attributes 00:14:56.350 ========================== 00:14:56.350 Submission Queue Entry Size 00:14:56.350 Max: 64 00:14:56.350 Min: 64 00:14:56.350 Completion Queue Entry Size 00:14:56.350 Max: 16 00:14:56.350 Min: 16 00:14:56.350 Number of Namespaces: 32 00:14:56.350 Compare Command: Supported 00:14:56.350 Write Uncorrectable Command: Not Supported 00:14:56.350 Dataset Management Command: Supported 00:14:56.350 Write Zeroes Command: Supported 00:14:56.350 Set Features Save Field: Not Supported 00:14:56.350 Reservations: Not Supported 00:14:56.350 Timestamp: Not Supported 00:14:56.350 Copy: Supported 00:14:56.350 Volatile Write Cache: Present 00:14:56.350 Atomic Write Unit (Normal): 1 00:14:56.350 Atomic Write Unit (PFail): 1 00:14:56.350 Atomic Compare & Write Unit: 1 00:14:56.350 Fused Compare & Write: Supported 00:14:56.350 Scatter-Gather List 00:14:56.350 SGL Command Set: Supported (Dword aligned) 00:14:56.350 SGL Keyed: Not Supported 00:14:56.350 SGL Bit Bucket Descriptor: Not Supported 00:14:56.350 SGL Metadata Pointer: Not Supported 00:14:56.350 Oversized SGL: Not Supported 00:14:56.350 SGL 
Metadata Address: Not Supported 00:14:56.350 SGL Offset: Not Supported 00:14:56.351 Transport SGL Data Block: Not Supported 00:14:56.351 Replay Protected Memory Block: Not Supported 00:14:56.351 00:14:56.351 Firmware Slot Information 00:14:56.351 ========================= 00:14:56.351 Active slot: 1 00:14:56.351 Slot 1 Firmware Revision: 25.01 00:14:56.351 00:14:56.351 00:14:56.351 Commands Supported and Effects 00:14:56.351 ============================== 00:14:56.351 Admin Commands 00:14:56.351 -------------- 00:14:56.351 Get Log Page (02h): Supported 00:14:56.351 Identify (06h): Supported 00:14:56.351 Abort (08h): Supported 00:14:56.351 Set Features (09h): Supported 00:14:56.351 Get Features (0Ah): Supported 00:14:56.351 Asynchronous Event Request (0Ch): Supported 00:14:56.351 Keep Alive (18h): Supported 00:14:56.351 I/O Commands 00:14:56.351 ------------ 00:14:56.351 Flush (00h): Supported LBA-Change 00:14:56.351 Write (01h): Supported LBA-Change 00:14:56.351 Read (02h): Supported 00:14:56.351 Compare (05h): Supported 00:14:56.351 Write Zeroes (08h): Supported LBA-Change 00:14:56.351 Dataset Management (09h): Supported LBA-Change 00:14:56.351 Copy (19h): Supported LBA-Change 00:14:56.351 00:14:56.351 Error Log 00:14:56.351 ========= 00:14:56.351 00:14:56.351 Arbitration 00:14:56.351 =========== 00:14:56.351 Arbitration Burst: 1 00:14:56.351 00:14:56.351 Power Management 00:14:56.351 ================ 00:14:56.351 Number of Power States: 1 00:14:56.351 Current Power State: Power State #0 00:14:56.351 Power State #0: 00:14:56.351 Max Power: 0.00 W 00:14:56.351 Non-Operational State: Operational 00:14:56.351 Entry Latency: Not Reported 00:14:56.351 Exit Latency: Not Reported 00:14:56.351 Relative Read Throughput: 0 00:14:56.351 Relative Read Latency: 0 00:14:56.351 Relative Write Throughput: 0 00:14:56.351 Relative Write Latency: 0 00:14:56.351 Idle Power: Not Reported 00:14:56.351 Active Power: Not Reported 00:14:56.351 Non-Operational Permissive Mode: Not 
Supported 00:14:56.351 00:14:56.351 Health Information 00:14:56.351 ================== 00:14:56.351 Critical Warnings: 00:14:56.351 Available Spare Space: OK 00:14:56.351 Temperature: OK 00:14:56.351 Device Reliability: OK 00:14:56.351 Read Only: No 00:14:56.351 Volatile Memory Backup: OK 00:14:56.351 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:56.351 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:56.351 Available Spare: 0% 00:14:56.351 Available Sp[2024-11-06 10:56:47.581887] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:56.351 [2024-11-06 10:56:47.589752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:56.351 [2024-11-06 10:56:47.589784] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:56.351 [2024-11-06 10:56:47.589793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.351 [2024-11-06 10:56:47.589800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.351 [2024-11-06 10:56:47.589807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.351 [2024-11-06 10:56:47.589813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.351 [2024-11-06 10:56:47.589860] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:56.351 [2024-11-06 10:56:47.589871] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:56.351 
[2024-11-06 10:56:47.590865] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:56.351 [2024-11-06 10:56:47.590916] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:56.351 [2024-11-06 10:56:47.590923] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:56.351 [2024-11-06 10:56:47.591871] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:56.351 [2024-11-06 10:56:47.591884] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:56.351 [2024-11-06 10:56:47.591932] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:56.351 [2024-11-06 10:56:47.594752] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:56.351 are Threshold: 0% 00:14:56.351 Life Percentage Used: 0% 00:14:56.351 Data Units Read: 0 00:14:56.351 Data Units Written: 0 00:14:56.351 Host Read Commands: 0 00:14:56.351 Host Write Commands: 0 00:14:56.351 Controller Busy Time: 0 minutes 00:14:56.351 Power Cycles: 0 00:14:56.351 Power On Hours: 0 hours 00:14:56.351 Unsafe Shutdowns: 0 00:14:56.351 Unrecoverable Media Errors: 0 00:14:56.351 Lifetime Error Log Entries: 0 00:14:56.351 Warning Temperature Time: 0 minutes 00:14:56.351 Critical Temperature Time: 0 minutes 00:14:56.351 00:14:56.351 Number of Queues 00:14:56.351 ================ 00:14:56.351 Number of I/O Submission Queues: 127 00:14:56.351 Number of I/O Completion Queues: 127 00:14:56.351 00:14:56.351 Active Namespaces 00:14:56.351 ================= 00:14:56.351 Namespace ID:1 00:14:56.351 Error Recovery Timeout: Unlimited 
00:14:56.351 Command Set Identifier: NVM (00h) 00:14:56.351 Deallocate: Supported 00:14:56.351 Deallocated/Unwritten Error: Not Supported 00:14:56.351 Deallocated Read Value: Unknown 00:14:56.351 Deallocate in Write Zeroes: Not Supported 00:14:56.351 Deallocated Guard Field: 0xFFFF 00:14:56.351 Flush: Supported 00:14:56.351 Reservation: Supported 00:14:56.351 Namespace Sharing Capabilities: Multiple Controllers 00:14:56.351 Size (in LBAs): 131072 (0GiB) 00:14:56.351 Capacity (in LBAs): 131072 (0GiB) 00:14:56.351 Utilization (in LBAs): 131072 (0GiB) 00:14:56.351 NGUID: A453DC636EF54E8B8FF44AC916514B5C 00:14:56.351 UUID: a453dc63-6ef5-4e8b-8ff4-4ac916514b5c 00:14:56.351 Thin Provisioning: Not Supported 00:14:56.351 Per-NS Atomic Units: Yes 00:14:56.351 Atomic Boundary Size (Normal): 0 00:14:56.351 Atomic Boundary Size (PFail): 0 00:14:56.351 Atomic Boundary Offset: 0 00:14:56.351 Maximum Single Source Range Length: 65535 00:14:56.351 Maximum Copy Length: 65535 00:14:56.351 Maximum Source Range Count: 1 00:14:56.351 NGUID/EUI64 Never Reused: No 00:14:56.351 Namespace Write Protected: No 00:14:56.351 Number of LBA Formats: 1 00:14:56.351 Current LBA Format: LBA Format #00 00:14:56.351 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:56.351 00:14:56.351 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:56.612 [2024-11-06 10:56:47.787816] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:01.899 Initializing NVMe Controllers 00:15:01.899 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:01.899 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:15:01.899 Initialization complete. Launching workers. 00:15:01.899 ======================================================== 00:15:01.899 Latency(us) 00:15:01.899 Device Information : IOPS MiB/s Average min max 00:15:01.899 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39992.10 156.22 3200.30 844.72 8775.59 00:15:01.899 ======================================================== 00:15:01.899 Total : 39992.10 156.22 3200.30 844.72 8775.59 00:15:01.899 00:15:01.899 [2024-11-06 10:56:52.891934] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:01.899 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:01.899 [2024-11-06 10:56:53.080524] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:07.185 Initializing NVMe Controllers 00:15:07.185 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:07.185 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:07.185 Initialization complete. Launching workers. 
00:15:07.185 ======================================================== 00:15:07.185 Latency(us) 00:15:07.185 Device Information : IOPS MiB/s Average min max 00:15:07.185 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35236.21 137.64 3632.26 1104.82 10665.78 00:15:07.185 ======================================================== 00:15:07.185 Total : 35236.21 137.64 3632.26 1104.82 10665.78 00:15:07.185 00:15:07.185 [2024-11-06 10:56:58.101310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:07.185 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:07.185 [2024-11-06 10:56:58.303511] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:12.471 [2024-11-06 10:57:03.442833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:12.471 Initializing NVMe Controllers 00:15:12.471 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:12.472 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:12.472 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:12.472 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:12.472 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:12.472 Initialization complete. Launching workers. 
00:15:12.472 Starting thread on core 2 00:15:12.472 Starting thread on core 3 00:15:12.472 Starting thread on core 1 00:15:12.472 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:12.472 [2024-11-06 10:57:03.725173] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:15.771 [2024-11-06 10:57:06.778165] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:15.771 Initializing NVMe Controllers 00:15:15.771 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:15.771 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:15.771 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:15.771 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:15.771 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:15.771 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:15.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:15.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:15.771 Initialization complete. Launching workers. 
00:15:15.771 Starting thread on core 1 with urgent priority queue 00:15:15.771 Starting thread on core 2 with urgent priority queue 00:15:15.771 Starting thread on core 3 with urgent priority queue 00:15:15.771 Starting thread on core 0 with urgent priority queue 00:15:15.771 SPDK bdev Controller (SPDK2 ) core 0: 16554.67 IO/s 6.04 secs/100000 ios 00:15:15.771 SPDK bdev Controller (SPDK2 ) core 1: 7214.33 IO/s 13.86 secs/100000 ios 00:15:15.771 SPDK bdev Controller (SPDK2 ) core 2: 9302.67 IO/s 10.75 secs/100000 ios 00:15:15.771 SPDK bdev Controller (SPDK2 ) core 3: 12215.67 IO/s 8.19 secs/100000 ios 00:15:15.771 ======================================================== 00:15:15.771 00:15:15.771 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:15.771 [2024-11-06 10:57:07.061153] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:15.771 Initializing NVMe Controllers 00:15:15.771 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:15.771 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:15.771 Namespace ID: 1 size: 0GB 00:15:15.771 Initialization complete. 00:15:15.771 INFO: using host memory buffer for IO 00:15:15.771 Hello world! 
00:15:15.771 [2024-11-06 10:57:07.071237] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:15.771 10:57:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:16.031 [2024-11-06 10:57:07.356032] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.414 Initializing NVMe Controllers 00:15:17.414 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.414 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.414 Initialization complete. Launching workers. 00:15:17.414 submit (in ns) avg, min, max = 7668.5, 3902.5, 4997675.8 00:15:17.414 complete (in ns) avg, min, max = 18815.5, 2410.8, 4007747.5 00:15:17.414 00:15:17.414 Submit histogram 00:15:17.414 ================ 00:15:17.414 Range in us Cumulative Count 00:15:17.414 3.893 - 3.920: 0.9380% ( 177) 00:15:17.414 3.920 - 3.947: 6.1738% ( 988) 00:15:17.414 3.947 - 3.973: 15.7711% ( 1811) 00:15:17.414 3.973 - 4.000: 26.6084% ( 2045) 00:15:17.414 4.000 - 4.027: 36.5660% ( 1879) 00:15:17.414 4.027 - 4.053: 46.7674% ( 1925) 00:15:17.414 4.053 - 4.080: 61.5050% ( 2781) 00:15:17.414 4.080 - 4.107: 78.2353% ( 3157) 00:15:17.414 4.107 - 4.133: 90.4028% ( 2296) 00:15:17.414 4.133 - 4.160: 96.4706% ( 1145) 00:15:17.414 4.160 - 4.187: 98.5374% ( 390) 00:15:17.414 4.187 - 4.213: 99.1309% ( 112) 00:15:17.414 4.213 - 4.240: 99.2316% ( 19) 00:15:17.414 4.240 - 4.267: 99.2952% ( 12) 00:15:17.414 4.267 - 4.293: 99.3005% ( 1) 00:15:17.414 4.347 - 4.373: 99.3058% ( 1) 00:15:17.414 4.373 - 4.400: 99.3111% ( 1) 00:15:17.414 4.400 - 4.427: 99.3164% ( 1) 00:15:17.414 4.453 - 4.480: 99.3217% ( 1) 00:15:17.414 4.507 - 4.533: 99.3270% ( 1) 00:15:17.414 4.560 - 4.587: 99.3376% ( 2) 
00:15:17.414 4.587 - 4.613: 99.3482% ( 2) 00:15:17.414 4.613 - 4.640: 99.3535% ( 1) 00:15:17.414 4.667 - 4.693: 99.3588% ( 1) 00:15:17.414 4.693 - 4.720: 99.3641% ( 1) 00:15:17.414 4.720 - 4.747: 99.3694% ( 1) 00:15:17.414 4.800 - 4.827: 99.3747% ( 1) 00:15:17.414 4.827 - 4.853: 99.3853% ( 2) 00:15:17.414 4.853 - 4.880: 99.3906% ( 1) 00:15:17.414 4.880 - 4.907: 99.4012% ( 2) 00:15:17.414 4.907 - 4.933: 99.4065% ( 1) 00:15:17.414 4.960 - 4.987: 99.4118% ( 1) 00:15:17.414 5.013 - 5.040: 99.4171% ( 1) 00:15:17.414 5.040 - 5.067: 99.4383% ( 4) 00:15:17.414 5.067 - 5.093: 99.4489% ( 2) 00:15:17.414 5.120 - 5.147: 99.4595% ( 2) 00:15:17.414 5.200 - 5.227: 99.4648% ( 1) 00:15:17.414 5.333 - 5.360: 99.4701% ( 1) 00:15:17.414 5.493 - 5.520: 99.4754% ( 1) 00:15:17.414 5.520 - 5.547: 99.4807% ( 1) 00:15:17.414 5.573 - 5.600: 99.4860% ( 1) 00:15:17.414 5.653 - 5.680: 99.4913% ( 1) 00:15:17.414 5.680 - 5.707: 99.5019% ( 2) 00:15:17.414 5.707 - 5.733: 99.5125% ( 2) 00:15:17.414 5.787 - 5.813: 99.5178% ( 1) 00:15:17.414 5.813 - 5.840: 99.5231% ( 1) 00:15:17.414 5.840 - 5.867: 99.5284% ( 1) 00:15:17.414 5.867 - 5.893: 99.5337% ( 1) 00:15:17.414 5.893 - 5.920: 99.5443% ( 2) 00:15:17.414 6.107 - 6.133: 99.5495% ( 1) 00:15:17.414 6.133 - 6.160: 99.5548% ( 1) 00:15:17.414 6.187 - 6.213: 99.5654% ( 2) 00:15:17.414 6.213 - 6.240: 99.5760% ( 2) 00:15:17.414 6.267 - 6.293: 99.5866% ( 2) 00:15:17.414 6.347 - 6.373: 99.5972% ( 2) 00:15:17.414 6.400 - 6.427: 99.6025% ( 1) 00:15:17.414 6.453 - 6.480: 99.6078% ( 1) 00:15:17.414 6.533 - 6.560: 99.6131% ( 1) 00:15:17.414 6.560 - 6.587: 99.6237% ( 2) 00:15:17.414 6.640 - 6.667: 99.6343% ( 2) 00:15:17.414 6.827 - 6.880: 99.6502% ( 3) 00:15:17.414 6.880 - 6.933: 99.6608% ( 2) 00:15:17.414 6.933 - 6.987: 99.6714% ( 2) 00:15:17.414 6.987 - 7.040: 99.6820% ( 2) 00:15:17.415 7.040 - 7.093: 99.6873% ( 1) 00:15:17.415 7.093 - 7.147: 99.6979% ( 2) 00:15:17.415 7.147 - 7.200: 99.7138% ( 3) 00:15:17.415 7.200 - 7.253: 99.7191% ( 1) 00:15:17.415 7.307 - 
7.360: 99.7297% ( 2) 00:15:17.415 7.360 - 7.413: 99.7350% ( 1) 00:15:17.415 7.413 - 7.467: 99.7403% ( 1) 00:15:17.415 7.467 - 7.520: 99.7456% ( 1) 00:15:17.415 7.520 - 7.573: 99.7668% ( 4) 00:15:17.415 7.573 - 7.627: 99.7721% ( 1) 00:15:17.415 7.627 - 7.680: 99.7933% ( 4) 00:15:17.415 7.680 - 7.733: 99.7986% ( 1) 00:15:17.415 7.733 - 7.787: 99.8092% ( 2) 00:15:17.415 7.787 - 7.840: 99.8304% ( 4) 00:15:17.415 7.840 - 7.893: 99.8410% ( 2) 00:15:17.415 7.947 - 8.000: 99.8516% ( 2) 00:15:17.415 8.000 - 8.053: 99.8622% ( 2) 00:15:17.415 8.053 - 8.107: 99.8675% ( 1) 00:15:17.415 8.160 - 8.213: 99.8728% ( 1) 00:15:17.415 8.213 - 8.267: 99.8781% ( 1) 00:15:17.415 8.320 - 8.373: 99.8834% ( 1) 00:15:17.415 8.480 - 8.533: 99.8887% ( 1) 00:15:17.415 8.533 - 8.587: 99.8940% ( 1) 00:15:17.415 8.907 - 8.960: 99.8993% ( 1) 00:15:17.415 9.173 - 9.227: 99.9046% ( 1) 00:15:17.415 9.600 - 9.653: 99.9099% ( 1) 00:15:17.415 3072.000 - 3085.653: 99.9152% ( 1) 00:15:17.415 3986.773 - 4014.080: 99.9947% ( 15) 00:15:17.415 4997.120 - 5024.427: 100.0000% ( 1) 00:15:17.415 00:15:17.415 Complete histogram 00:15:17.415 ================== 00:15:17.415 Range in us Cumulative Count 00:15:17.415 2.400 - 2.413: 0.0318% ( 6) 00:15:17.415 2.413 - 2.427: 0.3975% ( 69) 00:15:17.415 2.427 - 2.440: 0.4875% ( 17) 00:15:17.415 2.440 - 2.453: 0.5988% ( 21) 00:15:17.415 2.453 - 2.467: 20.4928% ( 3754) 00:15:17.415 2.467 - 2.480: 52.5914% ( 6057) 00:15:17.415 2.480 - 2.493: 62.1463% ( 1803) 00:15:17.415 2.493 - 2.507: 74.4197% ( 2316) 00:15:17.415 2.507 - 2.520: 78.7016% ( 808) 00:15:17.415 2.520 - 2.533: 81.6322% ( 553) 00:15:17.415 2.533 - 2.547: 86.5342% ( 925) 00:15:17.415 2.547 - 2.560: 92.4430% ( 1115) 00:15:17.415 2.560 - 2.573: 95.6121% ( 598) 00:15:17.415 2.573 - 2.587: 97.5888% ( 373) 00:15:17.415 2.587 - 2.600: 98.6222% ( 195) 00:15:17.415 2.600 - 2.613: 99.0514% ( 81) 00:15:17.415 2.613 - 2.627: 99.1892% ( 26) 00:15:17.415 2.627 - 2.640: 99.2475% ( 11) 00:15:17.415 2.640 - 2.653: 99.2581% ( 2) 
00:15:17.415 2.653 - 2.667: 99.2634% ( 1) 00:15:17.415 2.733 - 2.747: 99.2687% ( 1) 00:15:17.415 2.773 - 2.787: 99.2740% ( 1) 00:15:17.415 2.787 - 2.800: 99.2793% ( 1) 00:15:17.415 2.800 - 2.813: 99.2846% ( 1) 00:15:17.415 2.827 - 2.840: 99.2899% ( 1) 00:15:17.415 2.840 - 2.853: 99.2952% ( 1) 00:15:17.415 2.853 - 2.867: 99.3005% ( 1) 00:15:17.415 2.867 - 2.880: 99.3058% ( 1) 00:15:17.415 2.880 - 2.893: 99.3111% ( 1) 00:15:17.415 3.013 - 3.027: 99.3164% ( 1) 00:15:17.415 3.053 - 3.067: 99.3217% ( 1) 00:15:17.415 3.080 - 3.093: 99.3323% ( 2) 00:15:17.415 3.107 - 3.120: 99.3376% ( 1) 00:15:17.415 3.120 - 3.133: 99.3429% ( 1) 00:15:17.415 3.133 - 3.147: 99.3482% ( 1) 00:15:17.415 3.240 - 3.253: 99.3535% ( 1) 00:15:17.415 3.267 - 3.280: 99.3588% ( 1) 00:15:17.415 3.400 - 3.413: 99.3641% ( 1) 00:15:17.415 4.427 - 4.453: 99.3694% ( 1) 00:15:17.415 4.480 - 4.507: 99.3747% ( 1) 00:15:17.415 4.587 - 4.613: 99.3800% ( 1) 00:15:17.415 4.667 - 4.693: 99.3853% ( 1) 00:15:17.415 4.747 - 4.773: 99.3906% ( 1) 00:15:17.415 4.773 - 4.800: 99.4012% ( 2) 00:15:17.415 4.800 - 4.827: 99.4065% ( 1) 00:15:17.415 4.907 - 4.933: 99.4118% ( 1) 00:15:17.415 4.933 - 4.960: 99.4224% ( 2) 00:15:17.415 4.960 - 4.987: 99.4277% ( 1) 00:15:17.415 4.987 - 5.013: 99.4330% ( 1) 00:15:17.415 5.093 - 5.120: 99.4436% ( 2) 00:15:17.415 5.280 - 5.307: 99.4542% ( 2) 00:15:17.415 5.413 - 5.440: 99.4595% ( 1) 00:15:17.415 5.440 - 5.467: 99.4648% ( 1) 00:15:17.415 5.547 - 5.573: 99.4754% ( 2) 00:15:17.415 5.680 - 5.707: 99.4807% ( 1) 00:15:17.415 5.707 - 5.733: 99.4860% ( 1) 00:15:17.415 5.760 - 5.787: 99.4913% ( 1) 00:15:17.415 5.787 - 5.813: 99.5019% ( 2) 00:15:17.415 5.840 - 5.867: 99.5072% ( 1) 00:15:17.415 5.867 - 5.893: 99.5178% ( 2) 00:15:17.415 5.920 - 5.947: 99.5231% ( 1) 00:15:17.415 6.000 - 6.027: 99.5284% ( 1) 00:15:17.415 6.107 - 6.133: 99.5337% ( 1) 00:15:17.415 6.213 - 6.240: 99.5443% ( 2) 00:15:17.415 6.293 - 6.320: 99.5495% ( 1) 00:15:17.415 6.400 - 6.427: 99.5548% ( 1) 00:15:17.415 6.587 - 
6.613: 99.5601% ( 1) 00:15:17.415 6.613 - 6.640: 99.5654% ( 1) 00:15:17.415 10.080 - 10.133: 99.5707% ( 1) 00:15:17.415 11.680 - 11.733: 99.5760% ( 1) 00:15:17.415 13.867 - 13.973: 99.5813% ( 1) 00:15:17.415 [2024-11-06 10:57:08.451420] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.415 43.733 - 43.947: 99.5866% ( 1) 00:15:17.415 167.253 - 168.107: 99.5919% ( 1) 00:15:17.415 3986.773 - 4014.080: 100.0000% ( 77) 00:15:17.415 00:15:17.415 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:17.415 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:17.415 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:17.415 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:17.415 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:17.415 [ 00:15:17.415 { 00:15:17.415 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:17.415 "subtype": "Discovery", 00:15:17.415 "listen_addresses": [], 00:15:17.415 "allow_any_host": true, 00:15:17.415 "hosts": [] 00:15:17.415 }, 00:15:17.415 { 00:15:17.415 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:17.415 "subtype": "NVMe", 00:15:17.415 "listen_addresses": [ 00:15:17.415 { 00:15:17.415 "trtype": "VFIOUSER", 00:15:17.415 "adrfam": "IPv4", 00:15:17.415 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:17.415 "trsvcid": "0" 00:15:17.415 } 00:15:17.415 ], 00:15:17.415 "allow_any_host": true, 00:15:17.415 "hosts": [], 00:15:17.415 "serial_number": "SPDK1", 00:15:17.415 "model_number": "SPDK bdev 
Controller", 00:15:17.415 "max_namespaces": 32, 00:15:17.415 "min_cntlid": 1, 00:15:17.415 "max_cntlid": 65519, 00:15:17.415 "namespaces": [ 00:15:17.415 { 00:15:17.415 "nsid": 1, 00:15:17.415 "bdev_name": "Malloc1", 00:15:17.415 "name": "Malloc1", 00:15:17.415 "nguid": "D16EEAA9D437419392AB5608A88304CA", 00:15:17.415 "uuid": "d16eeaa9-d437-4193-92ab-5608a88304ca" 00:15:17.415 }, 00:15:17.415 { 00:15:17.415 "nsid": 2, 00:15:17.415 "bdev_name": "Malloc3", 00:15:17.415 "name": "Malloc3", 00:15:17.415 "nguid": "17D8EA1227CB49CBA86F7033F516D336", 00:15:17.415 "uuid": "17d8ea12-27cb-49cb-a86f-7033f516d336" 00:15:17.415 } 00:15:17.415 ] 00:15:17.415 }, 00:15:17.415 { 00:15:17.415 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:17.415 "subtype": "NVMe", 00:15:17.415 "listen_addresses": [ 00:15:17.415 { 00:15:17.415 "trtype": "VFIOUSER", 00:15:17.415 "adrfam": "IPv4", 00:15:17.415 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:17.415 "trsvcid": "0" 00:15:17.415 } 00:15:17.415 ], 00:15:17.415 "allow_any_host": true, 00:15:17.415 "hosts": [], 00:15:17.415 "serial_number": "SPDK2", 00:15:17.415 "model_number": "SPDK bdev Controller", 00:15:17.415 "max_namespaces": 32, 00:15:17.415 "min_cntlid": 1, 00:15:17.415 "max_cntlid": 65519, 00:15:17.415 "namespaces": [ 00:15:17.415 { 00:15:17.415 "nsid": 1, 00:15:17.415 "bdev_name": "Malloc2", 00:15:17.415 "name": "Malloc2", 00:15:17.415 "nguid": "A453DC636EF54E8B8FF44AC916514B5C", 00:15:17.415 "uuid": "a453dc63-6ef5-4e8b-8ff4-4ac916514b5c" 00:15:17.415 } 00:15:17.415 ] 00:15:17.415 } 00:15:17.415 ] 00:15:17.415 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:17.415 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3213489 00:15:17.415 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:17.415 10:57:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:17.415 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:17.415 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:17.415 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:17.416 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:17.416 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:17.416 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:17.676 Malloc4 00:15:17.676 [2024-11-06 10:57:08.886155] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.676 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:17.676 [2024-11-06 10:57:09.048266] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.676 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:17.938 Asynchronous Event Request test 00:15:17.938 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.938 Attached to 
/var/run/vfio-user/domain/vfio-user2/2 00:15:17.938 Registering asynchronous event callbacks... 00:15:17.938 Starting namespace attribute notice tests for all controllers... 00:15:17.938 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:17.938 aer_cb - Changed Namespace 00:15:17.938 Cleaning up... 00:15:17.938 [ 00:15:17.938 { 00:15:17.938 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:17.938 "subtype": "Discovery", 00:15:17.938 "listen_addresses": [], 00:15:17.938 "allow_any_host": true, 00:15:17.938 "hosts": [] 00:15:17.938 }, 00:15:17.938 { 00:15:17.938 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:17.938 "subtype": "NVMe", 00:15:17.938 "listen_addresses": [ 00:15:17.938 { 00:15:17.938 "trtype": "VFIOUSER", 00:15:17.938 "adrfam": "IPv4", 00:15:17.938 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:17.938 "trsvcid": "0" 00:15:17.938 } 00:15:17.938 ], 00:15:17.938 "allow_any_host": true, 00:15:17.938 "hosts": [], 00:15:17.938 "serial_number": "SPDK1", 00:15:17.938 "model_number": "SPDK bdev Controller", 00:15:17.938 "max_namespaces": 32, 00:15:17.938 "min_cntlid": 1, 00:15:17.938 "max_cntlid": 65519, 00:15:17.938 "namespaces": [ 00:15:17.938 { 00:15:17.938 "nsid": 1, 00:15:17.938 "bdev_name": "Malloc1", 00:15:17.938 "name": "Malloc1", 00:15:17.938 "nguid": "D16EEAA9D437419392AB5608A88304CA", 00:15:17.938 "uuid": "d16eeaa9-d437-4193-92ab-5608a88304ca" 00:15:17.938 }, 00:15:17.938 { 00:15:17.938 "nsid": 2, 00:15:17.938 "bdev_name": "Malloc3", 00:15:17.938 "name": "Malloc3", 00:15:17.938 "nguid": "17D8EA1227CB49CBA86F7033F516D336", 00:15:17.938 "uuid": "17d8ea12-27cb-49cb-a86f-7033f516d336" 00:15:17.938 } 00:15:17.938 ] 00:15:17.938 }, 00:15:17.938 { 00:15:17.938 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:17.938 "subtype": "NVMe", 00:15:17.938 "listen_addresses": [ 00:15:17.938 { 00:15:17.938 "trtype": "VFIOUSER", 00:15:17.938 "adrfam": "IPv4", 00:15:17.938 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:15:17.938 "trsvcid": "0" 00:15:17.938 } 00:15:17.938 ], 00:15:17.938 "allow_any_host": true, 00:15:17.938 "hosts": [], 00:15:17.938 "serial_number": "SPDK2", 00:15:17.938 "model_number": "SPDK bdev Controller", 00:15:17.938 "max_namespaces": 32, 00:15:17.938 "min_cntlid": 1, 00:15:17.938 "max_cntlid": 65519, 00:15:17.938 "namespaces": [ 00:15:17.938 { 00:15:17.938 "nsid": 1, 00:15:17.938 "bdev_name": "Malloc2", 00:15:17.938 "name": "Malloc2", 00:15:17.938 "nguid": "A453DC636EF54E8B8FF44AC916514B5C", 00:15:17.938 "uuid": "a453dc63-6ef5-4e8b-8ff4-4ac916514b5c" 00:15:17.938 }, 00:15:17.938 { 00:15:17.938 "nsid": 2, 00:15:17.938 "bdev_name": "Malloc4", 00:15:17.938 "name": "Malloc4", 00:15:17.938 "nguid": "29C2F081D70C4B17BCB80DBDD875F72F", 00:15:17.938 "uuid": "29c2f081-d70c-4b17-bcb8-0dbdd875f72f" 00:15:17.938 } 00:15:17.938 ] 00:15:17.938 } 00:15:17.938 ] 00:15:17.938 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3213489 00:15:17.939 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:17.939 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3204277 00:15:17.939 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 3204277 ']' 00:15:17.939 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 3204277 00:15:17.939 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:17.939 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:17.939 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3204277 00:15:17.939 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:15:17.939 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:17.939 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3204277' 00:15:17.939 killing process with pid 3204277 00:15:17.939 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 3204277 00:15:17.939 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 3204277 00:15:18.200 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:18.200 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:18.200 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:18.200 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:18.200 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:18.200 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3213742 00:15:18.200 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3213742' 00:15:18.200 Process pid: 3213742 00:15:18.200 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:18.200 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:18.200 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3213742 00:15:18.200 
10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 3213742 ']' 00:15:18.200 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.200 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:18.200 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.200 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:18.200 10:57:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:18.200 [2024-11-06 10:57:09.549095] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:18.200 [2024-11-06 10:57:09.550001] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:15:18.200 [2024-11-06 10:57:09.550043] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.461 [2024-11-06 10:57:09.624329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:18.461 [2024-11-06 10:57:09.659558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.461 [2024-11-06 10:57:09.659594] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:18.461 [2024-11-06 10:57:09.659603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.461 [2024-11-06 10:57:09.659609] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.461 [2024-11-06 10:57:09.659615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.461 [2024-11-06 10:57:09.661319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.461 [2024-11-06 10:57:09.661435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.461 [2024-11-06 10:57:09.661594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.461 [2024-11-06 10:57:09.661595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.461 [2024-11-06 10:57:09.716772] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:18.461 [2024-11-06 10:57:09.716894] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:18.461 [2024-11-06 10:57:09.717954] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:18.461 [2024-11-06 10:57:09.718940] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:18.461 [2024-11-06 10:57:09.719020] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:19.032 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:19.032 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:15:19.032 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:19.974 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:20.235 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:20.235 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:20.235 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:20.235 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:20.235 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:20.496 Malloc1 00:15:20.496 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:20.756 10:57:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:20.756 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:21.016 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:21.016 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:21.016 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:21.276 Malloc2 00:15:21.276 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:21.537 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:21.537 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:21.797 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:21.797 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3213742 00:15:21.797 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 3213742 ']' 00:15:21.797 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 3213742 00:15:21.797 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:21.797 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:21.797 10:57:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3213742 00:15:21.797 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:21.797 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:21.797 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3213742' 00:15:21.797 killing process with pid 3213742 00:15:21.797 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 3213742 00:15:21.797 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 3213742 00:15:22.098 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:22.098 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:22.098 00:15:22.098 real 0m51.394s 00:15:22.098 user 3m16.884s 00:15:22.098 sys 0m2.701s 00:15:22.098 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:22.098 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:22.098 ************************************ 00:15:22.098 END TEST nvmf_vfio_user 00:15:22.098 ************************************ 00:15:22.098 10:57:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:22.098 10:57:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:22.098 10:57:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:22.098 10:57:13 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:22.098 ************************************ 00:15:22.098 START TEST nvmf_vfio_user_nvme_compliance 00:15:22.098 ************************************ 00:15:22.098 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:22.098 * Looking for test storage... 00:15:22.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:22.098 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:22.098 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:22.098 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:22.454 10:57:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:22.454 10:57:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:22.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.454 --rc genhtml_branch_coverage=1 00:15:22.454 --rc genhtml_function_coverage=1 00:15:22.454 --rc genhtml_legend=1 00:15:22.454 --rc geninfo_all_blocks=1 00:15:22.454 --rc geninfo_unexecuted_blocks=1 00:15:22.454 00:15:22.454 ' 00:15:22.454 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:22.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.454 --rc genhtml_branch_coverage=1 00:15:22.454 --rc genhtml_function_coverage=1 00:15:22.454 --rc genhtml_legend=1 00:15:22.454 --rc geninfo_all_blocks=1 00:15:22.454 --rc geninfo_unexecuted_blocks=1 00:15:22.454 00:15:22.454 ' 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:22.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.455 --rc genhtml_branch_coverage=1 00:15:22.455 --rc genhtml_function_coverage=1 00:15:22.455 --rc 
genhtml_legend=1 00:15:22.455 --rc geninfo_all_blocks=1 00:15:22.455 --rc geninfo_unexecuted_blocks=1 00:15:22.455 00:15:22.455 ' 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:22.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.455 --rc genhtml_branch_coverage=1 00:15:22.455 --rc genhtml_function_coverage=1 00:15:22.455 --rc genhtml_legend=1 00:15:22.455 --rc geninfo_all_blocks=1 00:15:22.455 --rc geninfo_unexecuted_blocks=1 00:15:22.455 00:15:22.455 ' 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.455 10:57:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:22.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:22.455 10:57:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3214993 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3214993' 00:15:22.455 Process pid: 3214993 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3214993 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 3214993 ']' 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:22.455 10:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:22.455 [2024-11-06 10:57:13.662254] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:15:22.455 [2024-11-06 10:57:13.662334] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.455 [2024-11-06 10:57:13.739143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:22.455 [2024-11-06 10:57:13.780312] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.455 [2024-11-06 10:57:13.780351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.455 [2024-11-06 10:57:13.780359] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.455 [2024-11-06 10:57:13.780365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.455 [2024-11-06 10:57:13.780371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:22.455 [2024-11-06 10:57:13.781808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.455 [2024-11-06 10:57:13.782102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.455 [2024-11-06 10:57:13.782106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.397 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:23.397 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:15:23.397 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.339 10:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:24.339 malloc0 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:24.339 10:57:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:24.339 00:15:24.339 00:15:24.339 CUnit - A unit testing framework for C - Version 2.1-3 00:15:24.339 http://cunit.sourceforge.net/ 00:15:24.339 00:15:24.339 00:15:24.339 Suite: nvme_compliance 00:15:24.339 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-06 10:57:15.747205] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.339 [2024-11-06 10:57:15.748576] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:24.339 [2024-11-06 10:57:15.748588] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:24.339 [2024-11-06 10:57:15.748593] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:24.339 [2024-11-06 10:57:15.750226] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:24.599 passed 00:15:24.599 Test: admin_identify_ctrlr_verify_fused ...[2024-11-06 10:57:15.845828] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.599 [2024-11-06 10:57:15.848844] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:24.599 passed 00:15:24.599 Test: admin_identify_ns ...[2024-11-06 10:57:15.945989] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.599 [2024-11-06 10:57:16.005758] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:24.599 [2024-11-06 10:57:16.013757] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:24.859 [2024-11-06 10:57:16.034864] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:24.859 passed 00:15:24.859 Test: admin_get_features_mandatory_features ...[2024-11-06 10:57:16.128867] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.859 [2024-11-06 10:57:16.131884] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:24.859 passed 00:15:24.859 Test: admin_get_features_optional_features ...[2024-11-06 10:57:16.225428] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:24.859 [2024-11-06 10:57:16.228450] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:24.859 passed 00:15:25.119 Test: admin_set_features_number_of_queues ...[2024-11-06 10:57:16.321594] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:25.119 [2024-11-06 10:57:16.425861] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:25.119 passed 00:15:25.119 Test: admin_get_log_page_mandatory_logs ...[2024-11-06 10:57:16.517859] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:25.120 [2024-11-06 10:57:16.520884] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:25.380 passed 00:15:25.380 Test: admin_get_log_page_with_lpo ...[2024-11-06 10:57:16.616013] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:25.380 [2024-11-06 10:57:16.683758] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:25.380 [2024-11-06 10:57:16.696805] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:25.380 passed 00:15:25.380 Test: fabric_property_get ...[2024-11-06 10:57:16.788428] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:25.380 [2024-11-06 10:57:16.789680] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:25.380 [2024-11-06 10:57:16.791448] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:25.640 passed 00:15:25.640 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-06 10:57:16.886027] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:25.640 [2024-11-06 10:57:16.887279] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:25.640 [2024-11-06 10:57:16.889048] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:25.640 passed 00:15:25.640 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-06 10:57:16.983207] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:25.899 [2024-11-06 10:57:17.066757] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:25.899 [2024-11-06 10:57:17.082762] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:25.899 [2024-11-06 10:57:17.087836] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:25.899 passed 00:15:25.899 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-06 10:57:17.179641] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:25.899 [2024-11-06 10:57:17.180886] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:25.899 [2024-11-06 10:57:17.182656] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:25.899 passed 00:15:25.899 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-06 10:57:17.275758] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.160 [2024-11-06 10:57:17.353752] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:26.160 [2024-11-06 
10:57:17.377752] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:26.160 [2024-11-06 10:57:17.382833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.160 passed 00:15:26.160 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-06 10:57:17.474401] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.160 [2024-11-06 10:57:17.475644] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:26.160 [2024-11-06 10:57:17.475665] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:26.160 [2024-11-06 10:57:17.477420] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.160 passed 00:15:26.160 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-06 10:57:17.568494] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.419 [2024-11-06 10:57:17.663751] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:26.419 [2024-11-06 10:57:17.671753] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:26.419 [2024-11-06 10:57:17.679756] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:26.419 [2024-11-06 10:57:17.686753] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:26.419 [2024-11-06 10:57:17.716832] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.419 passed 00:15:26.419 Test: admin_create_io_sq_verify_pc ...[2024-11-06 10:57:17.806399] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.419 [2024-11-06 10:57:17.821762] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:26.419 [2024-11-06 10:57:17.839544] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.679 passed 00:15:26.679 Test: admin_create_io_qp_max_qps ...[2024-11-06 10:57:17.933079] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.061 [2024-11-06 10:57:19.045756] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:28.061 [2024-11-06 10:57:19.426837] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.061 passed 00:15:28.328 Test: admin_create_io_sq_shared_cq ...[2024-11-06 10:57:19.517975] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.328 [2024-11-06 10:57:19.649751] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:28.328 [2024-11-06 10:57:19.686814] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.328 passed 00:15:28.328 00:15:28.328 Run Summary: Type Total Ran Passed Failed Inactive 00:15:28.328 suites 1 1 n/a 0 0 00:15:28.328 tests 18 18 18 0 0 00:15:28.328 asserts 360 360 360 0 n/a 00:15:28.328 00:15:28.328 Elapsed time = 1.653 seconds 00:15:28.328 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3214993 00:15:28.328 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 3214993 ']' 00:15:28.328 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 3214993 00:15:28.328 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:15:28.591 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:28.591 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3214993 00:15:28.591 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:28.591 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:28.591 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3214993' 00:15:28.591 killing process with pid 3214993 00:15:28.591 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 3214993 00:15:28.591 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 3214993 00:15:28.591 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:28.591 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:28.591 00:15:28.591 real 0m6.575s 00:15:28.591 user 0m18.702s 00:15:28.591 sys 0m0.519s 00:15:28.591 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:28.591 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:28.591 ************************************ 00:15:28.591 END TEST nvmf_vfio_user_nvme_compliance 00:15:28.591 ************************************ 00:15:28.591 10:57:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:28.591 10:57:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:28.591 10:57:19 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:15:28.591 10:57:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:28.852 ************************************ 00:15:28.852 START TEST nvmf_vfio_user_fuzz 00:15:28.852 ************************************ 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:28.852 * Looking for test storage... 00:15:28.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.852 10:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:28.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.852 --rc genhtml_branch_coverage=1 00:15:28.852 --rc genhtml_function_coverage=1 00:15:28.852 --rc genhtml_legend=1 00:15:28.852 --rc geninfo_all_blocks=1 00:15:28.852 --rc geninfo_unexecuted_blocks=1 00:15:28.852 00:15:28.852 ' 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:28.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.852 --rc genhtml_branch_coverage=1 00:15:28.852 --rc genhtml_function_coverage=1 00:15:28.852 --rc genhtml_legend=1 00:15:28.852 --rc geninfo_all_blocks=1 00:15:28.852 --rc geninfo_unexecuted_blocks=1 00:15:28.852 00:15:28.852 ' 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:28.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.852 --rc genhtml_branch_coverage=1 00:15:28.852 --rc genhtml_function_coverage=1 00:15:28.852 --rc genhtml_legend=1 00:15:28.852 --rc geninfo_all_blocks=1 00:15:28.852 --rc geninfo_unexecuted_blocks=1 00:15:28.852 00:15:28.852 ' 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:28.852 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:28.852 --rc genhtml_branch_coverage=1 00:15:28.852 --rc genhtml_function_coverage=1 00:15:28.852 --rc genhtml_legend=1 00:15:28.852 --rc geninfo_all_blocks=1 00:15:28.852 --rc geninfo_unexecuted_blocks=1 00:15:28.852 00:15:28.852 ' 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:28.852 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.853 10:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:28.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3216262 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3216262' 00:15:28.853 Process pid: 3216262 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3216262 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 3216262 ']' 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:28.853 10:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:28.853 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:29.113 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:29.113 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:15:29.113 10:57:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:30.497 malloc0 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:30.497 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:02.605 Fuzzing completed. Shutting down the fuzz application 00:16:02.605 00:16:02.605 Dumping successful admin opcodes: 00:16:02.605 8, 9, 10, 24, 00:16:02.605 Dumping successful io opcodes: 00:16:02.605 0, 00:16:02.605 NS: 0x20000081ef00 I/O qp, Total commands completed: 1135261, total successful commands: 4470, random_seed: 4253333376 00:16:02.605 NS: 0x20000081ef00 admin qp, Total commands completed: 142748, total successful commands: 1160, random_seed: 3233640448 00:16:02.605 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:02.605 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.605 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.605 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.605 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3216262 00:16:02.605 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 3216262 ']' 00:16:02.605 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 3216262 00:16:02.605 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:16:02.605 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:02.605 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3216262 00:16:02.605 10:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:02.605 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:02.605 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3216262' 00:16:02.605 killing process with pid 3216262 00:16:02.606 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 3216262 00:16:02.606 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 3216262 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:02.606 00:16:02.606 real 0m33.138s 00:16:02.606 user 0m37.588s 00:16:02.606 sys 0m26.002s 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.606 ************************************ 00:16:02.606 END TEST nvmf_vfio_user_fuzz 00:16:02.606 ************************************ 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:02.606 ************************************ 00:16:02.606 START TEST nvmf_auth_target 00:16:02.606 ************************************ 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:02.606 * Looking for test storage... 00:16:02.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.606 10:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.606 10:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:02.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.606 --rc genhtml_branch_coverage=1 00:16:02.606 --rc genhtml_function_coverage=1 00:16:02.606 --rc genhtml_legend=1 00:16:02.606 --rc geninfo_all_blocks=1 00:16:02.606 --rc geninfo_unexecuted_blocks=1 00:16:02.606 00:16:02.606 ' 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:02.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.606 --rc genhtml_branch_coverage=1 00:16:02.606 --rc genhtml_function_coverage=1 00:16:02.606 --rc genhtml_legend=1 00:16:02.606 --rc geninfo_all_blocks=1 00:16:02.606 --rc geninfo_unexecuted_blocks=1 00:16:02.606 00:16:02.606 ' 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:02.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.606 --rc genhtml_branch_coverage=1 00:16:02.606 --rc genhtml_function_coverage=1 00:16:02.606 --rc genhtml_legend=1 00:16:02.606 --rc geninfo_all_blocks=1 00:16:02.606 --rc geninfo_unexecuted_blocks=1 00:16:02.606 00:16:02.606 ' 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:02.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.606 --rc genhtml_branch_coverage=1 00:16:02.606 --rc genhtml_function_coverage=1 00:16:02.606 --rc genhtml_legend=1 00:16:02.606 
--rc geninfo_all_blocks=1 00:16:02.606 --rc geninfo_unexecuted_blocks=1 00:16:02.606 00:16:02.606 ' 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.606 
10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.606 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:02.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:02.607 10:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:02.607 10:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:02.607 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:09.195 10:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:09.195 10:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:09.195 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:09.195 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.195 
10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:09.195 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:09.195 
10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:09.195 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:09.195 10:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:09.195 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:09.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:09.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:16:09.456 00:16:09.456 --- 10.0.0.2 ping statistics --- 00:16:09.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.456 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:09.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:09.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:16:09.456 00:16:09.456 --- 10.0.0.1 ping statistics --- 00:16:09.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.456 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
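The nvmf_tcp_init steps traced above amount to roughly the following privileged command sequence. This is a sketch reconstructed from the trace, not the harness source: the interface names `cvl_0_0`/`cvl_0_1`, the namespace name `cvl_0_0_ns_spdk`, and the 10.0.0.0/24 addresses are taken from the log, and every command requires root and the matching hardware, so it is shown for orientation only (no test is attached).

```shell
# Move the target-side port into its own network namespace so the SPDK
# target and the initiator can talk over two real NIC ports on one host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address each side: initiator stays in the default namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring the links (and the namespace loopback) up.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP discovery port and verify reachability both ways,
# mirroring the two ping runs in the log.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

After this, the target app is launched inside the namespace (the log's `NVMF_TARGET_NS_CMD` prefix, `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt`), so target and initiator traffic traverses the physical ports rather than loopback.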
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3226435 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3226435 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3226435 ']' 00:16:09.456 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.457 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:09.457 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:09.457 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:09.457 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3226455 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6b101c0dfbae8208f0a4d6a5172eb2d5241833e555815fcb 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.liT 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6b101c0dfbae8208f0a4d6a5172eb2d5241833e555815fcb 0 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6b101c0dfbae8208f0a4d6a5172eb2d5241833e555815fcb 0 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6b101c0dfbae8208f0a4d6a5172eb2d5241833e555815fcb 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:09.718 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:09.979 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.liT 00:16:09.979 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.liT 00:16:09.979 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.liT 00:16:09.979 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:09.979 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a4690175c233c1ca4c31523bd6bd2e73a8fccc270472515198a0f290348f3aee 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.IQn 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a4690175c233c1ca4c31523bd6bd2e73a8fccc270472515198a0f290348f3aee 3 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a4690175c233c1ca4c31523bd6bd2e73a8fccc270472515198a0f290348f3aee 3 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a4690175c233c1ca4c31523bd6bd2e73a8fccc270472515198a0f290348f3aee 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.IQn 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.IQn 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.IQn 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1254ba068cc45acbc82e39a2f7641411 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.rDC 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1254ba068cc45acbc82e39a2f7641411 1 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
1254ba068cc45acbc82e39a2f7641411 1 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1254ba068cc45acbc82e39a2f7641411 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.rDC 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.rDC 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.rDC 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b4e6e6af2ac466e1299a48948777a2464550ebe27beffcea 00:16:09.980 10:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.pDV 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b4e6e6af2ac466e1299a48948777a2464550ebe27beffcea 2 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b4e6e6af2ac466e1299a48948777a2464550ebe27beffcea 2 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b4e6e6af2ac466e1299a48948777a2464550ebe27beffcea 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.pDV 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.pDV 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.pDV 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0a52fee89ed43d36a7cc321ae57e94acc1e3cca216486595 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.amB 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0a52fee89ed43d36a7cc321ae57e94acc1e3cca216486595 2 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0a52fee89ed43d36a7cc321ae57e94acc1e3cca216486595 2 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0a52fee89ed43d36a7cc321ae57e94acc1e3cca216486595 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:09.980 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.amB 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.amB 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.amB 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2e690dac66d72b1bbb3f924f7bca378a 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Fga 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2e690dac66d72b1bbb3f924f7bca378a 1 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2e690dac66d72b1bbb3f924f7bca378a 1 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2e690dac66d72b1bbb3f924f7bca378a 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Fga 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Fga 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Fga 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=11142f81a8ff8a571e5289f27b7e273110e2dcb5ebb0afc2d208d2255322155b 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.tXH 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 11142f81a8ff8a571e5289f27b7e273110e2dcb5ebb0afc2d208d2255322155b 3 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 11142f81a8ff8a571e5289f27b7e273110e2dcb5ebb0afc2d208d2255322155b 3 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=11142f81a8ff8a571e5289f27b7e273110e2dcb5ebb0afc2d208d2255322155b 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.tXH 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.tXH 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.tXH 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3226435 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3226435 ']' 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:10.242 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.504 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:10.504 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:10.504 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3226455 /var/tmp/host.sock 00:16:10.504 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3226455 ']' 00:16:10.504 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:16:10.504 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:10.504 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:10.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:16:10.504 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:10.504 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.504 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:10.504 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:10.504 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:10.504 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.504 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.504 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.505 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:10.505 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.liT 00:16:10.505 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.505 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.505 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.505 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.liT 00:16:10.505 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.liT 00:16:10.765 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.IQn ]] 00:16:10.765 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IQn 00:16:10.765 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.765 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.765 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.765 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IQn 00:16:10.765 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IQn 00:16:11.025 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:11.025 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.rDC 00:16:11.025 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.025 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.025 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.025 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.rDC 00:16:11.025 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.rDC 00:16:11.286 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.pDV ]] 00:16:11.286 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pDV 00:16:11.286 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.286 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.286 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.286 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pDV 00:16:11.286 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pDV 00:16:11.286 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:11.286 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.amB 00:16:11.286 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.286 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.286 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.286 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.amB 00:16:11.286 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.amB 00:16:11.546 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Fga ]] 00:16:11.546 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fga 00:16:11.546 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.546 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.546 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.546 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fga 00:16:11.546 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fga 00:16:11.808 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:11.808 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tXH 00:16:11.808 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.808 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.808 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.808 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.tXH 00:16:11.808 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.tXH 00:16:12.069 10:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:12.069 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:12.069 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.069 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.069 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:12.069 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:12.069 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:12.069 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.069 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.069 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:12.069 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:12.069 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.069 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.069 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.069 10:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.069 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.069 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.069 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.069 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.330 00:16:12.330 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.330 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.330 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.591 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.591 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.591 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.591 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.591 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.591 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.591 { 00:16:12.591 "cntlid": 1, 00:16:12.591 "qid": 0, 00:16:12.591 "state": "enabled", 00:16:12.591 "thread": "nvmf_tgt_poll_group_000", 00:16:12.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:12.591 "listen_address": { 00:16:12.591 "trtype": "TCP", 00:16:12.591 "adrfam": "IPv4", 00:16:12.591 "traddr": "10.0.0.2", 00:16:12.591 "trsvcid": "4420" 00:16:12.591 }, 00:16:12.591 "peer_address": { 00:16:12.591 "trtype": "TCP", 00:16:12.591 "adrfam": "IPv4", 00:16:12.591 "traddr": "10.0.0.1", 00:16:12.591 "trsvcid": "53816" 00:16:12.591 }, 00:16:12.591 "auth": { 00:16:12.591 "state": "completed", 00:16:12.591 "digest": "sha256", 00:16:12.591 "dhgroup": "null" 00:16:12.591 } 00:16:12.591 } 00:16:12.591 ]' 00:16:12.591 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.591 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.591 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.591 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:12.591 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.591 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.591 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.591 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
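After each attach, `connect_authenticate` pulls the qpair list from the target and asserts the negotiated auth parameters with `jq`, as the three `[[ ... == ... ]]` checks above show. The same checks, reproduced against a trimmed copy of the JSON captured in the log:

```shell
# Replay of the qpair auth checks using the JSON from the trace
# (trimmed to the fields the test actually inspects).
qpairs='[{"cntlid": 1, "qid": 0, "state": "enabled",
          "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}}]'
digest=$(jq -r '.[0].auth.digest'  <<< "$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
state=$(jq -r '.[0].auth.state'    <<< "$qpairs")
# the test only proceeds to nvme connect once all three match
[[ $digest == sha256 && $dhgroup == null && $state == completed ]] \
    && echo "auth completed with $digest/$dhgroup"
```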
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.852 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:16:12.852 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:16:13.795 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.795 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:13.795 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.795 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.795 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.795 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.795 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
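The `nvme connect` step above passes the on-disk key contents directly via `--dhchap-secret` and `--dhchap-ctrl-secret`. The secret's second field is the hash id: `00` for the null-transform host key (key0 was generated with the null digest) and `03` for the sha512 controller key. A small parser over a secret string copied verbatim from the log:

```shell
# Split a DHHC-1 secret into its colon-separated fields
# (prefix : hash id : base64 payload : ). Secret copied from the log.
secret='DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==:'
IFS=: read -r prefix hash_id payload _ <<< "$secret"
case $hash_id in
    00) hash=null ;;
    01) hash=sha256 ;;
    02) hash=sha384 ;;
    03) hash=sha512 ;;
esac
echo "$prefix key, hash=$hash"
```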
sha256 --dhchap-dhgroups null 00:16:13.795 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:14.059 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:14.059 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.059 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:14.059 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:14.059 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:14.059 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.059 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.059 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.059 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.059 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.059 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.059 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.059 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.059 00:16:14.320 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.320 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.320 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.320 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.320 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.320 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.320 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.320 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.320 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.320 { 00:16:14.320 "cntlid": 3, 00:16:14.320 "qid": 0, 00:16:14.320 "state": "enabled", 00:16:14.320 "thread": "nvmf_tgt_poll_group_000", 00:16:14.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:14.320 "listen_address": { 00:16:14.320 "trtype": "TCP", 00:16:14.320 "adrfam": "IPv4", 00:16:14.320 
"traddr": "10.0.0.2", 00:16:14.320 "trsvcid": "4420" 00:16:14.320 }, 00:16:14.320 "peer_address": { 00:16:14.320 "trtype": "TCP", 00:16:14.320 "adrfam": "IPv4", 00:16:14.320 "traddr": "10.0.0.1", 00:16:14.320 "trsvcid": "53850" 00:16:14.320 }, 00:16:14.320 "auth": { 00:16:14.320 "state": "completed", 00:16:14.320 "digest": "sha256", 00:16:14.320 "dhgroup": "null" 00:16:14.320 } 00:16:14.320 } 00:16:14.320 ]' 00:16:14.320 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.320 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.320 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.581 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:14.581 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.581 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.581 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.581 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.581 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:16:14.581 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:16:15.525 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.525 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:15.525 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.525 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.525 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.525 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.525 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:15.525 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:15.525 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:15.525 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.525 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.525 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:15.525 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:15.525 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.525 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.526 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.526 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.526 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.526 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.526 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.526 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.786 00:16:15.786 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.786 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.786 
10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.047 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.047 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.047 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.047 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.047 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.047 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.047 { 00:16:16.047 "cntlid": 5, 00:16:16.047 "qid": 0, 00:16:16.047 "state": "enabled", 00:16:16.047 "thread": "nvmf_tgt_poll_group_000", 00:16:16.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:16.047 "listen_address": { 00:16:16.047 "trtype": "TCP", 00:16:16.047 "adrfam": "IPv4", 00:16:16.047 "traddr": "10.0.0.2", 00:16:16.047 "trsvcid": "4420" 00:16:16.047 }, 00:16:16.047 "peer_address": { 00:16:16.047 "trtype": "TCP", 00:16:16.047 "adrfam": "IPv4", 00:16:16.047 "traddr": "10.0.0.1", 00:16:16.047 "trsvcid": "53876" 00:16:16.047 }, 00:16:16.047 "auth": { 00:16:16.047 "state": "completed", 00:16:16.047 "digest": "sha256", 00:16:16.047 "dhgroup": "null" 00:16:16.047 } 00:16:16.047 } 00:16:16.047 ]' 00:16:16.047 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.047 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.047 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:16:16.047 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:16.047 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.308 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.308 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.308 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.308 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:16:16.308 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:17.250 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:17.511 00:16:17.511 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.511 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.511 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.772 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.772 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.772 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.772 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.772 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.772 
10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.772 { 00:16:17.772 "cntlid": 7, 00:16:17.772 "qid": 0, 00:16:17.772 "state": "enabled", 00:16:17.772 "thread": "nvmf_tgt_poll_group_000", 00:16:17.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:17.772 "listen_address": { 00:16:17.772 "trtype": "TCP", 00:16:17.772 "adrfam": "IPv4", 00:16:17.772 "traddr": "10.0.0.2", 00:16:17.772 "trsvcid": "4420" 00:16:17.772 }, 00:16:17.772 "peer_address": { 00:16:17.772 "trtype": "TCP", 00:16:17.772 "adrfam": "IPv4", 00:16:17.772 "traddr": "10.0.0.1", 00:16:17.772 "trsvcid": "51836" 00:16:17.772 }, 00:16:17.772 "auth": { 00:16:17.772 "state": "completed", 00:16:17.772 "digest": "sha256", 00:16:17.772 "dhgroup": "null" 00:16:17.772 } 00:16:17.772 } 00:16:17.772 ]' 00:16:17.772 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.772 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.772 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.772 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:17.772 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.772 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.772 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.772 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.032 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:16:18.032 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.975 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.236 00:16:19.236 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.236 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.236 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.497 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.497 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.497 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.497 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.497 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.497 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.497 { 00:16:19.497 "cntlid": 9, 00:16:19.497 "qid": 0, 00:16:19.497 "state": "enabled", 00:16:19.497 "thread": "nvmf_tgt_poll_group_000", 00:16:19.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:19.497 "listen_address": { 00:16:19.497 "trtype": "TCP", 00:16:19.497 "adrfam": "IPv4", 00:16:19.497 "traddr": "10.0.0.2", 00:16:19.497 "trsvcid": "4420" 00:16:19.497 }, 00:16:19.497 "peer_address": { 00:16:19.497 "trtype": "TCP", 00:16:19.497 "adrfam": "IPv4", 00:16:19.497 "traddr": "10.0.0.1", 00:16:19.497 "trsvcid": "51864" 00:16:19.497 
}, 00:16:19.497 "auth": { 00:16:19.497 "state": "completed", 00:16:19.497 "digest": "sha256", 00:16:19.497 "dhgroup": "ffdhe2048" 00:16:19.497 } 00:16:19.497 } 00:16:19.497 ]' 00:16:19.497 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.497 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.497 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.497 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:19.497 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.497 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.497 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.497 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.758 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:16:19.758 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret 
DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:16:20.701 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.701 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:20.701 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.701 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.701 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.701 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.701 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.701 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.702 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:20.702 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.702 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.702 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:20.702 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:20.702 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.702 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.702 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.702 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.702 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.702 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.702 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.702 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.963 00:16:20.963 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.963 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.963 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.224 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.224 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.224 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.224 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.224 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.224 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.224 { 00:16:21.224 "cntlid": 11, 00:16:21.224 "qid": 0, 00:16:21.224 "state": "enabled", 00:16:21.224 "thread": "nvmf_tgt_poll_group_000", 00:16:21.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:21.224 "listen_address": { 00:16:21.224 "trtype": "TCP", 00:16:21.224 "adrfam": "IPv4", 00:16:21.224 "traddr": "10.0.0.2", 00:16:21.224 "trsvcid": "4420" 00:16:21.224 }, 00:16:21.224 "peer_address": { 00:16:21.224 "trtype": "TCP", 00:16:21.224 "adrfam": "IPv4", 00:16:21.224 "traddr": "10.0.0.1", 00:16:21.224 "trsvcid": "51878" 00:16:21.224 }, 00:16:21.224 "auth": { 00:16:21.224 "state": "completed", 00:16:21.224 "digest": "sha256", 00:16:21.224 "dhgroup": "ffdhe2048" 00:16:21.224 } 00:16:21.224 } 00:16:21.224 ]' 00:16:21.224 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.224 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.224 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.224 10:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:21.224 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.224 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.224 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.224 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.485 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:16:21.485 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:16:22.056 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.317 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.578 00:16:22.578 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.578 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.578 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.838 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.838 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.838 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.838 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.838 10:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.838 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.838 { 00:16:22.838 "cntlid": 13, 00:16:22.838 "qid": 0, 00:16:22.838 "state": "enabled", 00:16:22.838 "thread": "nvmf_tgt_poll_group_000", 00:16:22.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:22.838 "listen_address": { 00:16:22.838 "trtype": "TCP", 00:16:22.838 "adrfam": "IPv4", 00:16:22.838 "traddr": "10.0.0.2", 00:16:22.838 "trsvcid": "4420" 00:16:22.838 }, 00:16:22.838 "peer_address": { 00:16:22.838 "trtype": "TCP", 00:16:22.838 "adrfam": "IPv4", 00:16:22.838 "traddr": "10.0.0.1", 00:16:22.838 "trsvcid": "51898" 00:16:22.838 }, 00:16:22.838 "auth": { 00:16:22.838 "state": "completed", 00:16:22.838 "digest": "sha256", 00:16:22.838 "dhgroup": "ffdhe2048" 00:16:22.838 } 00:16:22.838 } 00:16:22.838 ]' 00:16:22.838 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.839 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.839 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.839 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:22.839 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.839 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.839 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.839 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.100 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:16:23.100 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.043 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.304 00:16:24.304 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.304 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.304 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.565 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.565 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.565 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.565 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.565 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.565 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.565 { 00:16:24.565 "cntlid": 15, 00:16:24.565 "qid": 0, 00:16:24.565 "state": "enabled", 00:16:24.565 "thread": "nvmf_tgt_poll_group_000", 00:16:24.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:24.565 "listen_address": { 00:16:24.565 "trtype": "TCP", 00:16:24.565 "adrfam": "IPv4", 00:16:24.565 "traddr": "10.0.0.2", 00:16:24.565 "trsvcid": "4420" 00:16:24.565 }, 00:16:24.565 "peer_address": { 00:16:24.565 "trtype": "TCP", 00:16:24.565 "adrfam": "IPv4", 00:16:24.565 "traddr": "10.0.0.1", 
00:16:24.565 "trsvcid": "51934" 00:16:24.565 }, 00:16:24.565 "auth": { 00:16:24.565 "state": "completed", 00:16:24.565 "digest": "sha256", 00:16:24.565 "dhgroup": "ffdhe2048" 00:16:24.565 } 00:16:24.565 } 00:16:24.565 ]' 00:16:24.565 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.565 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.565 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.565 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:24.565 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.565 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.565 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.565 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.828 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:16:24.828 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:16:25.770 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.770 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:25.770 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.770 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.770 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.770 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:25.770 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.770 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:25.770 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:25.770 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:25.770 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.770 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.770 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:25.770 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:25.770 10:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.770 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.770 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.770 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.770 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.770 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.770 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.770 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.031 00:16:26.031 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.031 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.031 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.293 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.293 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.293 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.293 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.293 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.293 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.293 { 00:16:26.293 "cntlid": 17, 00:16:26.293 "qid": 0, 00:16:26.293 "state": "enabled", 00:16:26.293 "thread": "nvmf_tgt_poll_group_000", 00:16:26.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:26.293 "listen_address": { 00:16:26.293 "trtype": "TCP", 00:16:26.293 "adrfam": "IPv4", 00:16:26.293 "traddr": "10.0.0.2", 00:16:26.293 "trsvcid": "4420" 00:16:26.293 }, 00:16:26.293 "peer_address": { 00:16:26.293 "trtype": "TCP", 00:16:26.293 "adrfam": "IPv4", 00:16:26.293 "traddr": "10.0.0.1", 00:16:26.293 "trsvcid": "51962" 00:16:26.293 }, 00:16:26.293 "auth": { 00:16:26.293 "state": "completed", 00:16:26.293 "digest": "sha256", 00:16:26.293 "dhgroup": "ffdhe3072" 00:16:26.293 } 00:16:26.293 } 00:16:26.293 ]' 00:16:26.293 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.293 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.293 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.293 10:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:26.293 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.293 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.293 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.293 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.555 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:16:26.555 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:27.498 10:58:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.498 10:58:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.498 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.759 00:16:27.759 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.759 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.759 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.019 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.019 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.019 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.019 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.019 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.019 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.019 { 00:16:28.019 "cntlid": 19, 00:16:28.019 "qid": 0, 00:16:28.019 "state": "enabled", 00:16:28.019 "thread": "nvmf_tgt_poll_group_000", 00:16:28.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:28.020 "listen_address": { 00:16:28.020 "trtype": "TCP", 00:16:28.020 "adrfam": "IPv4", 00:16:28.020 "traddr": "10.0.0.2", 00:16:28.020 "trsvcid": "4420" 00:16:28.020 }, 00:16:28.020 "peer_address": { 00:16:28.020 "trtype": "TCP", 00:16:28.020 "adrfam": "IPv4", 00:16:28.020 "traddr": "10.0.0.1", 00:16:28.020 "trsvcid": "40014" 00:16:28.020 }, 00:16:28.020 "auth": { 00:16:28.020 "state": "completed", 00:16:28.020 "digest": "sha256", 00:16:28.020 "dhgroup": "ffdhe3072" 00:16:28.020 } 00:16:28.020 } 00:16:28.020 ]' 00:16:28.020 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.020 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.020 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.020 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.020 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.020 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.020 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.020 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.280 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:16:28.280 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:29.220 10:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.220 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.481 00:16:29.481 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.481 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.481 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.742 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.742 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.742 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.742 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.742 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.742 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.742 { 00:16:29.742 "cntlid": 21, 00:16:29.742 "qid": 0, 00:16:29.742 "state": "enabled", 00:16:29.742 "thread": "nvmf_tgt_poll_group_000", 00:16:29.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:29.742 "listen_address": { 00:16:29.742 "trtype": "TCP", 00:16:29.742 "adrfam": "IPv4", 00:16:29.742 "traddr": "10.0.0.2", 00:16:29.742 
"trsvcid": "4420" 00:16:29.742 }, 00:16:29.742 "peer_address": { 00:16:29.742 "trtype": "TCP", 00:16:29.742 "adrfam": "IPv4", 00:16:29.742 "traddr": "10.0.0.1", 00:16:29.742 "trsvcid": "40042" 00:16:29.742 }, 00:16:29.742 "auth": { 00:16:29.742 "state": "completed", 00:16:29.742 "digest": "sha256", 00:16:29.742 "dhgroup": "ffdhe3072" 00:16:29.742 } 00:16:29.742 } 00:16:29.742 ]' 00:16:29.742 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.742 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.742 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.742 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:29.742 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.742 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.742 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.742 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.002 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:16:30.002 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.944 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.205 00:16:31.205 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.205 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.205 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.466 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.466 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.466 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.466 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.466 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.466 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.466 { 00:16:31.466 "cntlid": 23, 00:16:31.466 "qid": 0, 00:16:31.466 "state": "enabled", 00:16:31.466 "thread": "nvmf_tgt_poll_group_000", 00:16:31.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:31.466 "listen_address": { 00:16:31.466 "trtype": "TCP", 00:16:31.466 "adrfam": "IPv4", 00:16:31.466 "traddr": "10.0.0.2", 00:16:31.466 "trsvcid": "4420" 00:16:31.466 }, 00:16:31.466 "peer_address": { 00:16:31.466 "trtype": "TCP", 00:16:31.466 "adrfam": "IPv4", 00:16:31.466 "traddr": "10.0.0.1", 00:16:31.466 "trsvcid": "40072" 00:16:31.466 }, 00:16:31.466 "auth": { 00:16:31.466 "state": "completed", 00:16:31.466 "digest": "sha256", 00:16:31.466 "dhgroup": "ffdhe3072" 00:16:31.466 } 00:16:31.466 } 00:16:31.466 ]' 00:16:31.466 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.466 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.466 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.466 10:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:31.466 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.466 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.466 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.466 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.727 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:16:31.727 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:32.706 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.707 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.707 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.707 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.004 00:16:33.004 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.004 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.004 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.265 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.265 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.265 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.265 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.265 10:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.265 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.265 { 00:16:33.265 "cntlid": 25, 00:16:33.265 "qid": 0, 00:16:33.265 "state": "enabled", 00:16:33.265 "thread": "nvmf_tgt_poll_group_000", 00:16:33.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:33.265 "listen_address": { 00:16:33.265 "trtype": "TCP", 00:16:33.265 "adrfam": "IPv4", 00:16:33.265 "traddr": "10.0.0.2", 00:16:33.265 "trsvcid": "4420" 00:16:33.265 }, 00:16:33.265 "peer_address": { 00:16:33.265 "trtype": "TCP", 00:16:33.265 "adrfam": "IPv4", 00:16:33.265 "traddr": "10.0.0.1", 00:16:33.265 "trsvcid": "40094" 00:16:33.265 }, 00:16:33.265 "auth": { 00:16:33.265 "state": "completed", 00:16:33.265 "digest": "sha256", 00:16:33.265 "dhgroup": "ffdhe4096" 00:16:33.265 } 00:16:33.265 } 00:16:33.265 ]' 00:16:33.265 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.265 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.265 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.265 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.265 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.265 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.265 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.265 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.526 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:16:33.526 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:16:34.097 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.097 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:34.097 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.097 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.097 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.097 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.097 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:34.097 10:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:34.357 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:34.357 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.357 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.357 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:34.357 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.357 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.357 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.357 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.357 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.357 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.357 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.357 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.357 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.618 00:16:34.618 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.618 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.618 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.879 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.879 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.879 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.879 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.879 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.879 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.879 { 00:16:34.879 "cntlid": 27, 00:16:34.879 "qid": 0, 00:16:34.879 "state": "enabled", 00:16:34.879 "thread": "nvmf_tgt_poll_group_000", 00:16:34.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:34.879 "listen_address": { 00:16:34.879 "trtype": "TCP", 00:16:34.879 "adrfam": "IPv4", 00:16:34.879 "traddr": "10.0.0.2", 00:16:34.879 
"trsvcid": "4420" 00:16:34.879 }, 00:16:34.879 "peer_address": { 00:16:34.879 "trtype": "TCP", 00:16:34.879 "adrfam": "IPv4", 00:16:34.879 "traddr": "10.0.0.1", 00:16:34.879 "trsvcid": "40130" 00:16:34.879 }, 00:16:34.879 "auth": { 00:16:34.879 "state": "completed", 00:16:34.879 "digest": "sha256", 00:16:34.879 "dhgroup": "ffdhe4096" 00:16:34.879 } 00:16:34.879 } 00:16:34.879 ]' 00:16:34.879 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.879 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.879 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.879 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:34.879 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.879 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.879 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.879 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.140 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:16:35.140 10:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.080 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.081 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.081 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.341 00:16:36.341 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.341 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.341 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.601 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.601 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.601 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.601 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.601 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.601 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.601 { 00:16:36.601 "cntlid": 29, 00:16:36.601 "qid": 0, 00:16:36.601 "state": "enabled", 00:16:36.601 "thread": "nvmf_tgt_poll_group_000", 00:16:36.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:36.601 "listen_address": { 00:16:36.601 "trtype": "TCP", 00:16:36.601 "adrfam": "IPv4", 00:16:36.601 "traddr": "10.0.0.2", 00:16:36.601 "trsvcid": "4420" 00:16:36.601 }, 00:16:36.601 "peer_address": { 00:16:36.601 "trtype": "TCP", 00:16:36.601 "adrfam": "IPv4", 00:16:36.601 "traddr": "10.0.0.1", 00:16:36.601 "trsvcid": "40164" 00:16:36.601 }, 00:16:36.601 "auth": { 00:16:36.601 "state": "completed", 00:16:36.601 "digest": "sha256", 00:16:36.601 "dhgroup": "ffdhe4096" 00:16:36.601 } 00:16:36.601 } 00:16:36.601 ]' 00:16:36.601 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.601 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.601 10:58:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.601 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:36.601 10:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.601 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.601 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.601 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.862 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:16:36.862 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:16:37.804 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.804 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:37.804 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.804 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.804 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.804 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.804 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:37.804 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:37.804 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:37.804 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.804 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.804 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:37.804 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.804 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.804 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:37.804 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.804 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.804 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.804 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.804 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.804 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.065 00:16:38.065 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.065 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.065 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.327 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.327 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.327 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.327 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:38.327 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.327 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.327 { 00:16:38.327 "cntlid": 31, 00:16:38.327 "qid": 0, 00:16:38.327 "state": "enabled", 00:16:38.327 "thread": "nvmf_tgt_poll_group_000", 00:16:38.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:38.327 "listen_address": { 00:16:38.327 "trtype": "TCP", 00:16:38.327 "adrfam": "IPv4", 00:16:38.327 "traddr": "10.0.0.2", 00:16:38.327 "trsvcid": "4420" 00:16:38.327 }, 00:16:38.327 "peer_address": { 00:16:38.327 "trtype": "TCP", 00:16:38.327 "adrfam": "IPv4", 00:16:38.327 "traddr": "10.0.0.1", 00:16:38.327 "trsvcid": "33798" 00:16:38.327 }, 00:16:38.327 "auth": { 00:16:38.327 "state": "completed", 00:16:38.327 "digest": "sha256", 00:16:38.327 "dhgroup": "ffdhe4096" 00:16:38.327 } 00:16:38.327 } 00:16:38.327 ]' 00:16:38.327 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.327 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.327 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.328 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.328 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.593 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.593 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.593 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.593 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:16:38.593 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:16:39.534 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.534 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.534 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.534 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.535 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.535 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.535 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.535 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.535 10:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.535 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:39.535 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.535 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.535 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:39.535 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.535 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.535 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.535 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.535 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.535 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.535 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.535 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.535 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.104 00:16:40.104 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.104 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.104 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.104 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.104 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.104 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.104 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.104 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.104 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.104 { 00:16:40.104 "cntlid": 33, 00:16:40.104 "qid": 0, 00:16:40.104 "state": "enabled", 00:16:40.104 "thread": "nvmf_tgt_poll_group_000", 00:16:40.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:40.104 "listen_address": { 00:16:40.104 "trtype": "TCP", 00:16:40.104 "adrfam": "IPv4", 00:16:40.104 "traddr": "10.0.0.2", 00:16:40.104 
"trsvcid": "4420" 00:16:40.104 }, 00:16:40.104 "peer_address": { 00:16:40.104 "trtype": "TCP", 00:16:40.104 "adrfam": "IPv4", 00:16:40.104 "traddr": "10.0.0.1", 00:16:40.104 "trsvcid": "33828" 00:16:40.104 }, 00:16:40.104 "auth": { 00:16:40.104 "state": "completed", 00:16:40.104 "digest": "sha256", 00:16:40.104 "dhgroup": "ffdhe6144" 00:16:40.104 } 00:16:40.104 } 00:16:40.104 ]' 00:16:40.104 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.104 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.104 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.363 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:40.363 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.363 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.364 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.364 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.364 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:16:40.364 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:16:41.300 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.300 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:41.301 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.301 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.301 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.301 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.301 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.301 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.561 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:41.561 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.561 10:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.561 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:41.561 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.561 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.561 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.561 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.561 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.561 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.561 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.561 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.561 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.821 00:16:41.821 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.821 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.821 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.082 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.082 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.082 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.082 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.082 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.082 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.082 { 00:16:42.082 "cntlid": 35, 00:16:42.082 "qid": 0, 00:16:42.082 "state": "enabled", 00:16:42.082 "thread": "nvmf_tgt_poll_group_000", 00:16:42.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:42.082 "listen_address": { 00:16:42.082 "trtype": "TCP", 00:16:42.082 "adrfam": "IPv4", 00:16:42.082 "traddr": "10.0.0.2", 00:16:42.082 "trsvcid": "4420" 00:16:42.082 }, 00:16:42.082 "peer_address": { 00:16:42.082 "trtype": "TCP", 00:16:42.082 "adrfam": "IPv4", 00:16:42.082 "traddr": "10.0.0.1", 00:16:42.082 "trsvcid": "33850" 00:16:42.082 }, 00:16:42.082 "auth": { 00:16:42.082 "state": "completed", 00:16:42.082 "digest": "sha256", 00:16:42.082 "dhgroup": "ffdhe6144" 00:16:42.082 } 00:16:42.082 } 00:16:42.082 ]' 00:16:42.082 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.082 10:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.082 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.082 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:42.082 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.082 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.082 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.082 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.343 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:16:42.343 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:16:42.913 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.175 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.176 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.176 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.746 00:16:43.746 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.746 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.746 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.746 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.746 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.746 10:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.746 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.746 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.746 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.746 { 00:16:43.746 "cntlid": 37, 00:16:43.746 "qid": 0, 00:16:43.746 "state": "enabled", 00:16:43.746 "thread": "nvmf_tgt_poll_group_000", 00:16:43.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:43.746 "listen_address": { 00:16:43.746 "trtype": "TCP", 00:16:43.746 "adrfam": "IPv4", 00:16:43.746 "traddr": "10.0.0.2", 00:16:43.746 "trsvcid": "4420" 00:16:43.746 }, 00:16:43.746 "peer_address": { 00:16:43.746 "trtype": "TCP", 00:16:43.746 "adrfam": "IPv4", 00:16:43.746 "traddr": "10.0.0.1", 00:16:43.746 "trsvcid": "33876" 00:16:43.746 }, 00:16:43.746 "auth": { 00:16:43.746 "state": "completed", 00:16:43.746 "digest": "sha256", 00:16:43.746 "dhgroup": "ffdhe6144" 00:16:43.746 } 00:16:43.746 } 00:16:43.746 ]' 00:16:43.746 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.746 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.746 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.007 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:44.007 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.007 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.007 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.007 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.007 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:16:44.007 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.949 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.520 00:16:45.520 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.520 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.520 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.520 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.520 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.520 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.520 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.520 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.520 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.520 { 00:16:45.520 "cntlid": 39, 00:16:45.520 "qid": 0, 00:16:45.520 "state": "enabled", 00:16:45.520 "thread": "nvmf_tgt_poll_group_000", 00:16:45.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:45.520 "listen_address": { 00:16:45.520 "trtype": "TCP", 00:16:45.520 "adrfam": 
"IPv4", 00:16:45.520 "traddr": "10.0.0.2", 00:16:45.520 "trsvcid": "4420" 00:16:45.520 }, 00:16:45.520 "peer_address": { 00:16:45.520 "trtype": "TCP", 00:16:45.520 "adrfam": "IPv4", 00:16:45.520 "traddr": "10.0.0.1", 00:16:45.520 "trsvcid": "33902" 00:16:45.520 }, 00:16:45.520 "auth": { 00:16:45.520 "state": "completed", 00:16:45.520 "digest": "sha256", 00:16:45.520 "dhgroup": "ffdhe6144" 00:16:45.520 } 00:16:45.520 } 00:16:45.520 ]' 00:16:45.520 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.520 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.520 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.780 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.780 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.780 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.780 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.780 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.780 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:16:45.780 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:16:46.722 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.722 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:46.722 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.722 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.722 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.722 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.722 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.722 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:46.722 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:46.722 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:46.722 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.722 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.722 
10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:46.722 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.722 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.722 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.722 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.722 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.722 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.722 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.722 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.722 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.293 00:16:47.293 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.293 10:58:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.293 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.554 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.554 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.554 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.554 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.554 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.554 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.554 { 00:16:47.554 "cntlid": 41, 00:16:47.554 "qid": 0, 00:16:47.554 "state": "enabled", 00:16:47.554 "thread": "nvmf_tgt_poll_group_000", 00:16:47.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:47.554 "listen_address": { 00:16:47.554 "trtype": "TCP", 00:16:47.554 "adrfam": "IPv4", 00:16:47.554 "traddr": "10.0.0.2", 00:16:47.554 "trsvcid": "4420" 00:16:47.554 }, 00:16:47.554 "peer_address": { 00:16:47.554 "trtype": "TCP", 00:16:47.554 "adrfam": "IPv4", 00:16:47.554 "traddr": "10.0.0.1", 00:16:47.554 "trsvcid": "45634" 00:16:47.554 }, 00:16:47.554 "auth": { 00:16:47.554 "state": "completed", 00:16:47.554 "digest": "sha256", 00:16:47.554 "dhgroup": "ffdhe8192" 00:16:47.554 } 00:16:47.554 } 00:16:47.554 ]' 00:16:47.554 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.554 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:47.554 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.554 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.554 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.554 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.554 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.554 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.813 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:16:47.813 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:16:48.762 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.762 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:48.762 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.762 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.762 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.762 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.762 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.762 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.762 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:48.762 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.762 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.762 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:48.762 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:48.762 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.762 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:48.762 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.762 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.762 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.762 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.762 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.762 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.332 00:16:49.332 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.332 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.332 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.593 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.593 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.593 10:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.593 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.593 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.593 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.593 { 00:16:49.593 "cntlid": 43, 00:16:49.593 "qid": 0, 00:16:49.593 "state": "enabled", 00:16:49.593 "thread": "nvmf_tgt_poll_group_000", 00:16:49.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:49.593 "listen_address": { 00:16:49.593 "trtype": "TCP", 00:16:49.593 "adrfam": "IPv4", 00:16:49.593 "traddr": "10.0.0.2", 00:16:49.593 "trsvcid": "4420" 00:16:49.593 }, 00:16:49.593 "peer_address": { 00:16:49.593 "trtype": "TCP", 00:16:49.593 "adrfam": "IPv4", 00:16:49.593 "traddr": "10.0.0.1", 00:16:49.593 "trsvcid": "45672" 00:16:49.593 }, 00:16:49.593 "auth": { 00:16:49.593 "state": "completed", 00:16:49.593 "digest": "sha256", 00:16:49.593 "dhgroup": "ffdhe8192" 00:16:49.593 } 00:16:49.593 } 00:16:49.593 ]' 00:16:49.593 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.593 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.593 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.593 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.593 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.593 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.593 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.593 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.853 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:16:49.853 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:16:50.423 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.424 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.424 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.424 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.424 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.424 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.424 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:50.424 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:50.685 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:50.685 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.685 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.685 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:50.685 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:50.685 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.685 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.685 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.685 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.685 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.685 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.685 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.685 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.255 00:16:51.255 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.255 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.255 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.516 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.516 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.516 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.516 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.516 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.516 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.516 { 00:16:51.516 "cntlid": 45, 00:16:51.516 "qid": 0, 00:16:51.516 "state": "enabled", 00:16:51.516 "thread": "nvmf_tgt_poll_group_000", 00:16:51.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:51.516 
"listen_address": { 00:16:51.516 "trtype": "TCP", 00:16:51.516 "adrfam": "IPv4", 00:16:51.516 "traddr": "10.0.0.2", 00:16:51.516 "trsvcid": "4420" 00:16:51.516 }, 00:16:51.516 "peer_address": { 00:16:51.516 "trtype": "TCP", 00:16:51.516 "adrfam": "IPv4", 00:16:51.516 "traddr": "10.0.0.1", 00:16:51.516 "trsvcid": "45702" 00:16:51.516 }, 00:16:51.516 "auth": { 00:16:51.516 "state": "completed", 00:16:51.516 "digest": "sha256", 00:16:51.516 "dhgroup": "ffdhe8192" 00:16:51.516 } 00:16:51.516 } 00:16:51.516 ]' 00:16:51.516 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.516 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.516 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.516 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.516 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.516 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.516 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.516 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.775 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:16:51.775 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:16:52.716 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.716 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:52.716 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.716 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.716 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.716 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.716 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.716 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.716 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:52.716 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.716 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:52.716 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:52.716 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:52.716 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.716 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:52.716 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.716 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.716 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.716 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.716 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.716 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.286 00:16:53.286 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.286 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:53.286 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.547 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.547 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.547 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.547 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.547 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.547 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.547 { 00:16:53.547 "cntlid": 47, 00:16:53.547 "qid": 0, 00:16:53.547 "state": "enabled", 00:16:53.547 "thread": "nvmf_tgt_poll_group_000", 00:16:53.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:53.547 "listen_address": { 00:16:53.547 "trtype": "TCP", 00:16:53.547 "adrfam": "IPv4", 00:16:53.547 "traddr": "10.0.0.2", 00:16:53.547 "trsvcid": "4420" 00:16:53.547 }, 00:16:53.547 "peer_address": { 00:16:53.547 "trtype": "TCP", 00:16:53.547 "adrfam": "IPv4", 00:16:53.547 "traddr": "10.0.0.1", 00:16:53.547 "trsvcid": "45734" 00:16:53.547 }, 00:16:53.547 "auth": { 00:16:53.547 "state": "completed", 00:16:53.547 "digest": "sha256", 00:16:53.547 "dhgroup": "ffdhe8192" 00:16:53.547 } 00:16:53.547 } 00:16:53.547 ]' 00:16:53.547 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.547 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.547 10:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.547 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.547 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.547 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.547 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.547 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.807 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:16:53.807 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:16:54.379 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.379 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:54.379 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:54.379 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.379 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.379 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:54.379 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.379 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.379 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:54.379 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:54.639 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:54.639 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.639 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:54.639 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:54.640 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:54.640 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.640 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.640 
10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.640 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.640 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.640 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.640 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.640 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.900 00:16:54.900 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.900 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.900 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.161 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.161 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.161 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.161 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.161 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.161 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.161 { 00:16:55.161 "cntlid": 49, 00:16:55.161 "qid": 0, 00:16:55.161 "state": "enabled", 00:16:55.161 "thread": "nvmf_tgt_poll_group_000", 00:16:55.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:55.161 "listen_address": { 00:16:55.161 "trtype": "TCP", 00:16:55.161 "adrfam": "IPv4", 00:16:55.161 "traddr": "10.0.0.2", 00:16:55.161 "trsvcid": "4420" 00:16:55.161 }, 00:16:55.161 "peer_address": { 00:16:55.161 "trtype": "TCP", 00:16:55.161 "adrfam": "IPv4", 00:16:55.161 "traddr": "10.0.0.1", 00:16:55.161 "trsvcid": "45752" 00:16:55.161 }, 00:16:55.161 "auth": { 00:16:55.161 "state": "completed", 00:16:55.161 "digest": "sha384", 00:16:55.161 "dhgroup": "null" 00:16:55.161 } 00:16:55.161 } 00:16:55.161 ]' 00:16:55.161 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.161 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.161 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.161 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:55.161 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.161 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.161 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:16:55.161 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.421 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:16:55.421 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.364 10:58:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.624 00:16:56.624 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.624 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.624 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.885 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.885 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.885 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.885 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.885 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.885 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.885 { 00:16:56.885 "cntlid": 51, 00:16:56.885 "qid": 0, 00:16:56.885 "state": "enabled", 00:16:56.885 "thread": "nvmf_tgt_poll_group_000", 00:16:56.885 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:56.885 "listen_address": { 00:16:56.885 "trtype": "TCP", 00:16:56.885 "adrfam": "IPv4", 00:16:56.885 "traddr": "10.0.0.2", 00:16:56.885 "trsvcid": "4420" 00:16:56.885 }, 00:16:56.885 "peer_address": { 00:16:56.885 "trtype": "TCP", 00:16:56.885 "adrfam": "IPv4", 00:16:56.885 "traddr": "10.0.0.1", 00:16:56.885 "trsvcid": "45784" 00:16:56.885 }, 00:16:56.885 "auth": { 00:16:56.885 "state": "completed", 00:16:56.885 "digest": "sha384", 00:16:56.885 "dhgroup": "null" 00:16:56.885 } 00:16:56.885 } 00:16:56.885 ]' 00:16:56.885 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.885 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.885 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.885 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:56.885 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.885 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.885 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.885 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.146 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:16:57.146 10:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:16:58.088 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.088 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:58.088 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.088 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.088 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.088 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.088 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:58.088 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:58.088 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:58.088 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:16:58.088 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.088 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:58.088 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:58.088 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.088 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.089 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.089 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.089 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.089 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.089 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.089 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.350 00:16:58.350 10:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.350 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.350 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.350 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.350 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.350 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.350 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.350 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.611 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.611 { 00:16:58.611 "cntlid": 53, 00:16:58.611 "qid": 0, 00:16:58.611 "state": "enabled", 00:16:58.611 "thread": "nvmf_tgt_poll_group_000", 00:16:58.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:58.611 "listen_address": { 00:16:58.611 "trtype": "TCP", 00:16:58.611 "adrfam": "IPv4", 00:16:58.611 "traddr": "10.0.0.2", 00:16:58.611 "trsvcid": "4420" 00:16:58.611 }, 00:16:58.611 "peer_address": { 00:16:58.611 "trtype": "TCP", 00:16:58.611 "adrfam": "IPv4", 00:16:58.611 "traddr": "10.0.0.1", 00:16:58.611 "trsvcid": "35240" 00:16:58.611 }, 00:16:58.611 "auth": { 00:16:58.611 "state": "completed", 00:16:58.611 "digest": "sha384", 00:16:58.611 "dhgroup": "null" 00:16:58.611 } 00:16:58.611 } 00:16:58.611 ]' 00:16:58.611 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:58.611 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.611 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.611 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:58.611 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.611 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.611 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.611 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.872 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:16:58.872 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:16:59.444 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.444 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.444 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.444 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.705 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.705 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.705 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:59.705 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:59.705 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:59.705 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.705 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:59.705 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:59.705 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:59.705 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.705 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:59.705 
10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.706 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.706 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.706 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:59.706 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.706 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.967 00:16:59.967 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.967 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.967 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.228 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.228 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.228 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.228 10:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.228 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.228 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.228 { 00:17:00.228 "cntlid": 55, 00:17:00.228 "qid": 0, 00:17:00.228 "state": "enabled", 00:17:00.228 "thread": "nvmf_tgt_poll_group_000", 00:17:00.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:00.228 "listen_address": { 00:17:00.228 "trtype": "TCP", 00:17:00.228 "adrfam": "IPv4", 00:17:00.228 "traddr": "10.0.0.2", 00:17:00.228 "trsvcid": "4420" 00:17:00.228 }, 00:17:00.228 "peer_address": { 00:17:00.228 "trtype": "TCP", 00:17:00.228 "adrfam": "IPv4", 00:17:00.228 "traddr": "10.0.0.1", 00:17:00.228 "trsvcid": "35254" 00:17:00.228 }, 00:17:00.228 "auth": { 00:17:00.228 "state": "completed", 00:17:00.228 "digest": "sha384", 00:17:00.228 "dhgroup": "null" 00:17:00.228 } 00:17:00.228 } 00:17:00.228 ]' 00:17:00.228 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.228 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.228 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.228 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:00.228 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.228 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.228 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.228 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.488 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:00.489 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:01.431 10:58:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.431 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.692 00:17:01.692 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.692 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.692 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.953 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.953 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.953 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.953 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.953 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.953 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.953 { 00:17:01.953 "cntlid": 57, 00:17:01.953 "qid": 0, 00:17:01.953 "state": "enabled", 00:17:01.953 "thread": "nvmf_tgt_poll_group_000", 00:17:01.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:01.953 "listen_address": { 00:17:01.953 "trtype": "TCP", 00:17:01.953 "adrfam": "IPv4", 00:17:01.953 "traddr": "10.0.0.2", 00:17:01.953 
"trsvcid": "4420" 00:17:01.953 }, 00:17:01.953 "peer_address": { 00:17:01.953 "trtype": "TCP", 00:17:01.953 "adrfam": "IPv4", 00:17:01.953 "traddr": "10.0.0.1", 00:17:01.953 "trsvcid": "35280" 00:17:01.953 }, 00:17:01.953 "auth": { 00:17:01.953 "state": "completed", 00:17:01.953 "digest": "sha384", 00:17:01.953 "dhgroup": "ffdhe2048" 00:17:01.953 } 00:17:01.953 } 00:17:01.953 ]' 00:17:01.953 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.953 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.953 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.953 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:01.953 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.953 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.953 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.953 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.214 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:02.214 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.155 10:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.155 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.416 00:17:03.416 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.416 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.416 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.416 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.416 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.416 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.416 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.677 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.677 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.677 { 00:17:03.677 "cntlid": 59, 00:17:03.677 "qid": 0, 00:17:03.677 "state": "enabled", 00:17:03.677 "thread": "nvmf_tgt_poll_group_000", 00:17:03.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:03.677 "listen_address": { 00:17:03.677 "trtype": "TCP", 00:17:03.677 "adrfam": "IPv4", 00:17:03.677 "traddr": "10.0.0.2", 00:17:03.677 "trsvcid": "4420" 00:17:03.677 }, 00:17:03.677 "peer_address": { 00:17:03.677 "trtype": "TCP", 00:17:03.677 "adrfam": "IPv4", 00:17:03.677 "traddr": "10.0.0.1", 00:17:03.677 "trsvcid": "35308" 00:17:03.677 }, 00:17:03.677 "auth": { 00:17:03.677 "state": "completed", 00:17:03.677 "digest": "sha384", 00:17:03.677 "dhgroup": "ffdhe2048" 00:17:03.677 } 00:17:03.677 } 00:17:03.677 ]' 00:17:03.677 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.677 10:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.677 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.677 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:03.677 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.677 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.677 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.677 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.938 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:17:03.938 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:17:04.508 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.770 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:04.770 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.770 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.770 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.770 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.770 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:04.770 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:04.770 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:04.770 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.770 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.770 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:04.770 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:04.770 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.770 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:04.770 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.770 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.770 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.770 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.770 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.770 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.031 00:17:05.031 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.031 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.031 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.292 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.292 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.292 10:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.292 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.292 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.292 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.292 { 00:17:05.292 "cntlid": 61, 00:17:05.292 "qid": 0, 00:17:05.292 "state": "enabled", 00:17:05.292 "thread": "nvmf_tgt_poll_group_000", 00:17:05.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:05.292 "listen_address": { 00:17:05.292 "trtype": "TCP", 00:17:05.292 "adrfam": "IPv4", 00:17:05.292 "traddr": "10.0.0.2", 00:17:05.292 "trsvcid": "4420" 00:17:05.292 }, 00:17:05.292 "peer_address": { 00:17:05.292 "trtype": "TCP", 00:17:05.292 "adrfam": "IPv4", 00:17:05.292 "traddr": "10.0.0.1", 00:17:05.292 "trsvcid": "35346" 00:17:05.292 }, 00:17:05.292 "auth": { 00:17:05.292 "state": "completed", 00:17:05.292 "digest": "sha384", 00:17:05.292 "dhgroup": "ffdhe2048" 00:17:05.292 } 00:17:05.292 } 00:17:05.292 ]' 00:17:05.292 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.292 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.292 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.292 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:05.292 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.292 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.292 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.292 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.554 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:17:05.554 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.496 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.756 00:17:06.756 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.756 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.756 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.016 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.016 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.016 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.016 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.017 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.017 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.017 { 00:17:07.017 "cntlid": 63, 00:17:07.017 "qid": 0, 00:17:07.017 "state": "enabled", 00:17:07.017 "thread": "nvmf_tgt_poll_group_000", 00:17:07.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:07.017 "listen_address": { 00:17:07.017 "trtype": "TCP", 00:17:07.017 "adrfam": 
"IPv4", 00:17:07.017 "traddr": "10.0.0.2", 00:17:07.017 "trsvcid": "4420" 00:17:07.017 }, 00:17:07.017 "peer_address": { 00:17:07.017 "trtype": "TCP", 00:17:07.017 "adrfam": "IPv4", 00:17:07.017 "traddr": "10.0.0.1", 00:17:07.017 "trsvcid": "35374" 00:17:07.017 }, 00:17:07.017 "auth": { 00:17:07.017 "state": "completed", 00:17:07.017 "digest": "sha384", 00:17:07.017 "dhgroup": "ffdhe2048" 00:17:07.017 } 00:17:07.017 } 00:17:07.017 ]' 00:17:07.017 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.017 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.017 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.017 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.017 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.017 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.017 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.017 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.276 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:07.276 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:08.217 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.217 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.217 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.217 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.217 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.217 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.217 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.217 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:08.217 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:08.217 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:08.217 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.217 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.217 
10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:08.217 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:08.217 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.217 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.217 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.218 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.218 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.218 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.218 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.218 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.478 00:17:08.478 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.478 10:58:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.478 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.739 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.739 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.739 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.739 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.739 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.739 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.739 { 00:17:08.739 "cntlid": 65, 00:17:08.739 "qid": 0, 00:17:08.739 "state": "enabled", 00:17:08.739 "thread": "nvmf_tgt_poll_group_000", 00:17:08.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:08.739 "listen_address": { 00:17:08.739 "trtype": "TCP", 00:17:08.739 "adrfam": "IPv4", 00:17:08.739 "traddr": "10.0.0.2", 00:17:08.739 "trsvcid": "4420" 00:17:08.739 }, 00:17:08.739 "peer_address": { 00:17:08.739 "trtype": "TCP", 00:17:08.739 "adrfam": "IPv4", 00:17:08.739 "traddr": "10.0.0.1", 00:17:08.739 "trsvcid": "59166" 00:17:08.739 }, 00:17:08.739 "auth": { 00:17:08.739 "state": "completed", 00:17:08.739 "digest": "sha384", 00:17:08.739 "dhgroup": "ffdhe3072" 00:17:08.739 } 00:17:08.739 } 00:17:08.739 ]' 00:17:08.739 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.739 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:17:08.739 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.739 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:08.739 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.001 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.001 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.001 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.001 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:09.001 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.944 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.204 00:17:10.204 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.204 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.204 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.464 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.464 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.464 10:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.464 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.464 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.464 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.464 { 00:17:10.464 "cntlid": 67, 00:17:10.464 "qid": 0, 00:17:10.464 "state": "enabled", 00:17:10.464 "thread": "nvmf_tgt_poll_group_000", 00:17:10.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:10.464 "listen_address": { 00:17:10.464 "trtype": "TCP", 00:17:10.464 "adrfam": "IPv4", 00:17:10.464 "traddr": "10.0.0.2", 00:17:10.464 "trsvcid": "4420" 00:17:10.464 }, 00:17:10.464 "peer_address": { 00:17:10.464 "trtype": "TCP", 00:17:10.464 "adrfam": "IPv4", 00:17:10.464 "traddr": "10.0.0.1", 00:17:10.464 "trsvcid": "59202" 00:17:10.464 }, 00:17:10.464 "auth": { 00:17:10.464 "state": "completed", 00:17:10.464 "digest": "sha384", 00:17:10.464 "dhgroup": "ffdhe3072" 00:17:10.464 } 00:17:10.464 } 00:17:10.464 ]' 00:17:10.464 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.464 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.464 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.725 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:10.725 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.725 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.725 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.725 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.725 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:17:10.725 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:17:11.667 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.667 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.667 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.667 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.667 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.667 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.667 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:11.667 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:11.667 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:11.667 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.667 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.667 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:11.667 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:11.667 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.667 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.667 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.667 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.667 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.667 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.667 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.667 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.927 00:17:11.927 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.927 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.927 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.188 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.188 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.188 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.188 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.188 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.188 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.188 { 00:17:12.188 "cntlid": 69, 00:17:12.188 "qid": 0, 00:17:12.188 "state": "enabled", 00:17:12.188 "thread": "nvmf_tgt_poll_group_000", 00:17:12.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:12.188 
"listen_address": { 00:17:12.188 "trtype": "TCP", 00:17:12.188 "adrfam": "IPv4", 00:17:12.188 "traddr": "10.0.0.2", 00:17:12.188 "trsvcid": "4420" 00:17:12.188 }, 00:17:12.188 "peer_address": { 00:17:12.188 "trtype": "TCP", 00:17:12.188 "adrfam": "IPv4", 00:17:12.188 "traddr": "10.0.0.1", 00:17:12.188 "trsvcid": "59228" 00:17:12.188 }, 00:17:12.188 "auth": { 00:17:12.188 "state": "completed", 00:17:12.188 "digest": "sha384", 00:17:12.188 "dhgroup": "ffdhe3072" 00:17:12.188 } 00:17:12.188 } 00:17:12.188 ]' 00:17:12.188 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.188 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.188 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.188 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:12.188 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.188 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.188 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.188 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.491 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:17:12.491 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:17:13.127 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.127 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:13.127 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.127 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.127 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.127 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.127 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:13.127 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:13.396 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:13.396 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.396 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:13.396 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:13.396 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:13.396 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.397 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:13.397 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.397 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.397 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.397 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:13.397 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.397 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.658 00:17:13.658 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.658 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:13.658 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.919 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.919 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.919 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.919 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.919 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.919 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.919 { 00:17:13.919 "cntlid": 71, 00:17:13.919 "qid": 0, 00:17:13.919 "state": "enabled", 00:17:13.919 "thread": "nvmf_tgt_poll_group_000", 00:17:13.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:13.919 "listen_address": { 00:17:13.919 "trtype": "TCP", 00:17:13.919 "adrfam": "IPv4", 00:17:13.919 "traddr": "10.0.0.2", 00:17:13.919 "trsvcid": "4420" 00:17:13.919 }, 00:17:13.919 "peer_address": { 00:17:13.919 "trtype": "TCP", 00:17:13.919 "adrfam": "IPv4", 00:17:13.919 "traddr": "10.0.0.1", 00:17:13.919 "trsvcid": "59254" 00:17:13.919 }, 00:17:13.919 "auth": { 00:17:13.919 "state": "completed", 00:17:13.919 "digest": "sha384", 00:17:13.919 "dhgroup": "ffdhe3072" 00:17:13.919 } 00:17:13.919 } 00:17:13.919 ]' 00:17:13.919 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.919 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.919 10:59:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.919 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.919 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.919 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.919 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.919 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.178 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:14.179 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:14.748 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.009 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.271 00:17:15.271 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.271 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.271 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.532 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.532 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.532 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.532 10:59:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.532 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.533 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.533 { 00:17:15.533 "cntlid": 73, 00:17:15.533 "qid": 0, 00:17:15.533 "state": "enabled", 00:17:15.533 "thread": "nvmf_tgt_poll_group_000", 00:17:15.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:15.533 "listen_address": { 00:17:15.533 "trtype": "TCP", 00:17:15.533 "adrfam": "IPv4", 00:17:15.533 "traddr": "10.0.0.2", 00:17:15.533 "trsvcid": "4420" 00:17:15.533 }, 00:17:15.533 "peer_address": { 00:17:15.533 "trtype": "TCP", 00:17:15.533 "adrfam": "IPv4", 00:17:15.533 "traddr": "10.0.0.1", 00:17:15.533 "trsvcid": "59288" 00:17:15.533 }, 00:17:15.533 "auth": { 00:17:15.533 "state": "completed", 00:17:15.533 "digest": "sha384", 00:17:15.533 "dhgroup": "ffdhe4096" 00:17:15.533 } 00:17:15.533 } 00:17:15.533 ]' 00:17:15.533 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.533 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.533 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.533 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:15.533 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.533 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.533 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.533 10:59:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.793 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:15.793 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:16.737 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.737 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.737 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.737 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.737 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.737 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.737 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:16.737 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:16.737 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:16.737 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.737 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.737 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:16.737 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:16.737 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.737 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.737 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.737 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.737 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.737 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.737 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.737 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.999 00:17:16.999 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.999 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.999 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.261 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.261 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.261 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.261 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.261 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.261 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.261 { 00:17:17.261 "cntlid": 75, 00:17:17.261 "qid": 0, 00:17:17.261 "state": "enabled", 00:17:17.261 "thread": "nvmf_tgt_poll_group_000", 00:17:17.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:17.261 
"listen_address": { 00:17:17.261 "trtype": "TCP", 00:17:17.261 "adrfam": "IPv4", 00:17:17.261 "traddr": "10.0.0.2", 00:17:17.261 "trsvcid": "4420" 00:17:17.261 }, 00:17:17.261 "peer_address": { 00:17:17.261 "trtype": "TCP", 00:17:17.261 "adrfam": "IPv4", 00:17:17.261 "traddr": "10.0.0.1", 00:17:17.261 "trsvcid": "52600" 00:17:17.261 }, 00:17:17.261 "auth": { 00:17:17.261 "state": "completed", 00:17:17.261 "digest": "sha384", 00:17:17.261 "dhgroup": "ffdhe4096" 00:17:17.261 } 00:17:17.261 } 00:17:17.261 ]' 00:17:17.261 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.261 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.261 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.261 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:17.261 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.261 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.261 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.261 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.523 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:17:17.523 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.465 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.725 00:17:18.725 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:18.725 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.725 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.986 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.986 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.986 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.986 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.986 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.986 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.986 { 00:17:18.986 "cntlid": 77, 00:17:18.986 "qid": 0, 00:17:18.986 "state": "enabled", 00:17:18.986 "thread": "nvmf_tgt_poll_group_000", 00:17:18.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:18.986 "listen_address": { 00:17:18.986 "trtype": "TCP", 00:17:18.986 "adrfam": "IPv4", 00:17:18.986 "traddr": "10.0.0.2", 00:17:18.986 "trsvcid": "4420" 00:17:18.986 }, 00:17:18.986 "peer_address": { 00:17:18.986 "trtype": "TCP", 00:17:18.986 "adrfam": "IPv4", 00:17:18.986 "traddr": "10.0.0.1", 00:17:18.986 "trsvcid": "52618" 00:17:18.986 }, 00:17:18.986 "auth": { 00:17:18.986 "state": "completed", 00:17:18.986 "digest": "sha384", 00:17:18.986 "dhgroup": "ffdhe4096" 00:17:18.986 } 00:17:18.986 } 00:17:18.986 ]' 00:17:18.986 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.986 10:59:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.986 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.986 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:18.986 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.248 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.248 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.248 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.248 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:17:19.248 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:20.191 10:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.191 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.452 00:17:20.452 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.452 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.452 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.713 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.713 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.713 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.713 10:59:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.713 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.713 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.713 { 00:17:20.713 "cntlid": 79, 00:17:20.713 "qid": 0, 00:17:20.713 "state": "enabled", 00:17:20.713 "thread": "nvmf_tgt_poll_group_000", 00:17:20.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:20.713 "listen_address": { 00:17:20.713 "trtype": "TCP", 00:17:20.713 "adrfam": "IPv4", 00:17:20.713 "traddr": "10.0.0.2", 00:17:20.713 "trsvcid": "4420" 00:17:20.713 }, 00:17:20.713 "peer_address": { 00:17:20.713 "trtype": "TCP", 00:17:20.713 "adrfam": "IPv4", 00:17:20.714 "traddr": "10.0.0.1", 00:17:20.714 "trsvcid": "52654" 00:17:20.714 }, 00:17:20.714 "auth": { 00:17:20.714 "state": "completed", 00:17:20.714 "digest": "sha384", 00:17:20.714 "dhgroup": "ffdhe4096" 00:17:20.714 } 00:17:20.714 } 00:17:20.714 ]' 00:17:20.714 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.714 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.714 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.714 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.714 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.975 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.975 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.975 10:59:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.975 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:20.975 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:21.917 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.917 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.917 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.917 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.917 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.917 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.917 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.917 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:17:21.917 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:21.917 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:21.917 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.917 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.917 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:21.917 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:21.917 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.918 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.918 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.918 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.918 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.918 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.918 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.918 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.488 00:17:22.488 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.488 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.488 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.488 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.488 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.488 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.488 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.488 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.488 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.488 { 00:17:22.488 "cntlid": 81, 00:17:22.488 "qid": 0, 00:17:22.488 "state": "enabled", 00:17:22.488 "thread": "nvmf_tgt_poll_group_000", 00:17:22.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:22.488 "listen_address": { 
00:17:22.488 "trtype": "TCP", 00:17:22.488 "adrfam": "IPv4", 00:17:22.488 "traddr": "10.0.0.2", 00:17:22.488 "trsvcid": "4420" 00:17:22.488 }, 00:17:22.488 "peer_address": { 00:17:22.488 "trtype": "TCP", 00:17:22.488 "adrfam": "IPv4", 00:17:22.488 "traddr": "10.0.0.1", 00:17:22.488 "trsvcid": "52674" 00:17:22.488 }, 00:17:22.488 "auth": { 00:17:22.488 "state": "completed", 00:17:22.488 "digest": "sha384", 00:17:22.488 "dhgroup": "ffdhe6144" 00:17:22.488 } 00:17:22.488 } 00:17:22.488 ]' 00:17:22.488 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.748 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.748 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.748 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:22.748 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.748 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.748 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.748 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.009 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:23.009 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:23.580 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.580 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.580 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.580 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.580 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.580 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.580 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:23.580 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:23.841 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:23.841 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:23.841 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.841 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:23.841 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:23.841 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.841 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.841 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.841 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.841 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.841 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.841 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.841 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.101 00:17:24.101 10:59:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.101 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.101 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.362 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.362 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.362 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.362 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.363 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.363 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.363 { 00:17:24.363 "cntlid": 83, 00:17:24.363 "qid": 0, 00:17:24.363 "state": "enabled", 00:17:24.363 "thread": "nvmf_tgt_poll_group_000", 00:17:24.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:24.363 "listen_address": { 00:17:24.363 "trtype": "TCP", 00:17:24.363 "adrfam": "IPv4", 00:17:24.363 "traddr": "10.0.0.2", 00:17:24.363 "trsvcid": "4420" 00:17:24.363 }, 00:17:24.363 "peer_address": { 00:17:24.363 "trtype": "TCP", 00:17:24.363 "adrfam": "IPv4", 00:17:24.363 "traddr": "10.0.0.1", 00:17:24.363 "trsvcid": "52706" 00:17:24.363 }, 00:17:24.363 "auth": { 00:17:24.363 "state": "completed", 00:17:24.363 "digest": "sha384", 00:17:24.363 "dhgroup": "ffdhe6144" 00:17:24.363 } 00:17:24.363 } 00:17:24.363 ]' 00:17:24.363 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:17:24.363 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.363 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.623 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:24.624 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.624 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.624 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.624 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.624 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:17:24.624 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.565 10:59:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.565 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.136 00:17:26.136 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.136 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.136 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.136 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.136 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.136 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.136 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.136 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.136 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.136 { 00:17:26.136 "cntlid": 85, 00:17:26.136 "qid": 0, 00:17:26.136 "state": "enabled", 00:17:26.136 "thread": "nvmf_tgt_poll_group_000", 00:17:26.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:26.136 "listen_address": { 00:17:26.136 "trtype": "TCP", 00:17:26.136 "adrfam": "IPv4", 00:17:26.136 "traddr": "10.0.0.2", 00:17:26.136 "trsvcid": "4420" 00:17:26.136 }, 00:17:26.136 "peer_address": { 00:17:26.136 "trtype": "TCP", 00:17:26.136 "adrfam": "IPv4", 00:17:26.136 "traddr": "10.0.0.1", 00:17:26.136 "trsvcid": "52734" 00:17:26.136 }, 00:17:26.136 "auth": { 00:17:26.136 "state": "completed", 00:17:26.136 "digest": "sha384", 00:17:26.136 "dhgroup": "ffdhe6144" 00:17:26.136 } 00:17:26.136 } 00:17:26.136 ]' 00:17:26.136 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.136 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.136 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.397 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:26.397 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.397 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:26.397 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.397 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.397 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:17:26.397 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.340 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.911 00:17:27.911 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.911 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.911 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.911 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.911 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.911 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.911 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.911 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.911 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.911 { 00:17:27.911 "cntlid": 87, 00:17:27.911 "qid": 0, 00:17:27.911 "state": "enabled", 00:17:27.911 "thread": "nvmf_tgt_poll_group_000", 00:17:27.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:27.911 "listen_address": { 00:17:27.911 "trtype": 
"TCP", 00:17:27.911 "adrfam": "IPv4", 00:17:27.911 "traddr": "10.0.0.2", 00:17:27.911 "trsvcid": "4420" 00:17:27.911 }, 00:17:27.911 "peer_address": { 00:17:27.911 "trtype": "TCP", 00:17:27.911 "adrfam": "IPv4", 00:17:27.911 "traddr": "10.0.0.1", 00:17:27.911 "trsvcid": "60784" 00:17:27.911 }, 00:17:27.911 "auth": { 00:17:27.911 "state": "completed", 00:17:27.911 "digest": "sha384", 00:17:27.911 "dhgroup": "ffdhe6144" 00:17:27.911 } 00:17:27.911 } 00:17:27.911 ]' 00:17:27.911 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.172 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.173 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.173 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.173 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.173 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.173 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.173 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.432 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:28.432 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:29.004 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.004 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.004 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.004 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.004 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.004 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.004 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.004 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:29.004 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:29.265 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:29.265 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.265 10:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.265 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:29.265 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:29.265 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.265 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.265 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.265 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.265 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.265 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.265 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.265 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.837 00:17:29.837 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.837 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.837 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.098 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.098 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.098 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.098 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.098 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.098 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.098 { 00:17:30.098 "cntlid": 89, 00:17:30.098 "qid": 0, 00:17:30.098 "state": "enabled", 00:17:30.098 "thread": "nvmf_tgt_poll_group_000", 00:17:30.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:30.098 "listen_address": { 00:17:30.098 "trtype": "TCP", 00:17:30.098 "adrfam": "IPv4", 00:17:30.098 "traddr": "10.0.0.2", 00:17:30.098 "trsvcid": "4420" 00:17:30.098 }, 00:17:30.098 "peer_address": { 00:17:30.098 "trtype": "TCP", 00:17:30.098 "adrfam": "IPv4", 00:17:30.098 "traddr": "10.0.0.1", 00:17:30.098 "trsvcid": "60810" 00:17:30.098 }, 00:17:30.098 "auth": { 00:17:30.098 "state": "completed", 00:17:30.098 "digest": "sha384", 00:17:30.098 "dhgroup": "ffdhe8192" 00:17:30.098 } 00:17:30.098 } 00:17:30.098 ]' 00:17:30.098 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.098 10:59:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.098 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.098 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.098 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.098 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.098 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.098 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.360 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:30.360 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.303 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.873 00:17:31.873 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.873 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.873 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.873 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.873 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.873 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.873 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.873 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.873 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.873 { 00:17:31.873 "cntlid": 91, 00:17:31.873 "qid": 0, 00:17:31.873 "state": "enabled", 00:17:31.873 "thread": "nvmf_tgt_poll_group_000", 00:17:31.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:31.873 "listen_address": { 00:17:31.873 "trtype": "TCP", 00:17:31.873 "adrfam": "IPv4", 00:17:31.873 "traddr": "10.0.0.2", 00:17:31.873 "trsvcid": "4420" 00:17:31.873 }, 00:17:31.873 "peer_address": { 00:17:31.873 "trtype": "TCP", 00:17:31.873 "adrfam": "IPv4", 00:17:31.873 "traddr": "10.0.0.1", 00:17:31.873 "trsvcid": "60846" 00:17:31.873 }, 00:17:31.873 "auth": { 00:17:31.873 "state": "completed", 00:17:31.873 "digest": "sha384", 00:17:31.873 "dhgroup": "ffdhe8192" 00:17:31.873 } 00:17:31.873 } 00:17:31.873 ]' 00:17:31.873 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.134 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.134 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.134 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.134 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.134 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:32.134 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.134 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.395 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:17:32.395 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:17:32.968 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.968 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.968 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.968 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.968 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.968 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
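The repeated `jq -r '.[0].auth.digest'` / `[[ sha384 == \s\h\a\3\8\4 ]]` pairs in the trace validate each qpair's negotiated auth parameters (the backslash-escaped pattern is just how xtrace prints a literal `[[ ... == ... ]]` match). The same check can be sketched without jq as plain substring tests; `check_qpair_auth` is a hypothetical stand-in for the auth.sh@75-77 checks, and the inlined JSON is a trimmed copy of the `nvmf_subsystem_get_qpairs` output above:

```shell
# Hypothetical stand-in for the auth.sh@75-77 checks: confirm a qpairs JSON
# blob reports completed authentication with the expected digest/dhgroup.
check_qpair_auth() {  # args: <json> <digest> <dhgroup>
  local json=$1
  [[ $json == *'"state": "completed"'* ]] || return 1
  [[ $json == *"\"digest\": \"$2\""*   ]] || return 1
  [[ $json == *"\"dhgroup\": \"$3\""*  ]] || return 1
}

qpairs='[ { "state": "enabled", "auth": { "state": "completed", "digest": "sha384", "dhgroup": "ffdhe8192" } } ]'
check_qpair_auth "$qpairs" sha384 ffdhe8192 && echo "auth ok"  # prints "auth ok"
```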
00:17:32.968 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:32.968 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.228 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:33.228 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.228 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.228 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:33.228 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:33.228 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.228 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.228 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.228 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.228 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.228 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.228 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.228 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.800 00:17:33.800 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.800 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.800 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.061 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.061 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.061 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.061 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.061 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.061 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.061 { 00:17:34.061 "cntlid": 93, 00:17:34.061 "qid": 0, 00:17:34.061 "state": "enabled", 00:17:34.062 "thread": "nvmf_tgt_poll_group_000", 00:17:34.062 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.062 "listen_address": { 00:17:34.062 "trtype": "TCP", 00:17:34.062 "adrfam": "IPv4", 00:17:34.062 "traddr": "10.0.0.2", 00:17:34.062 "trsvcid": "4420" 00:17:34.062 }, 00:17:34.062 "peer_address": { 00:17:34.062 "trtype": "TCP", 00:17:34.062 "adrfam": "IPv4", 00:17:34.062 "traddr": "10.0.0.1", 00:17:34.062 "trsvcid": "60864" 00:17:34.062 }, 00:17:34.062 "auth": { 00:17:34.062 "state": "completed", 00:17:34.062 "digest": "sha384", 00:17:34.062 "dhgroup": "ffdhe8192" 00:17:34.062 } 00:17:34.062 } 00:17:34.062 ]' 00:17:34.062 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.062 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.062 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.062 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.062 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.062 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.062 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.062 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.323 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:17:34.323 10:59:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.266 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.837 00:17:35.837 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:35.837 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.837 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.098 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.098 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.098 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.098 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.098 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.098 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.098 { 00:17:36.098 "cntlid": 95, 00:17:36.098 "qid": 0, 00:17:36.098 "state": "enabled", 00:17:36.098 "thread": "nvmf_tgt_poll_group_000", 00:17:36.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:36.098 "listen_address": { 00:17:36.098 "trtype": "TCP", 00:17:36.098 "adrfam": "IPv4", 00:17:36.098 "traddr": "10.0.0.2", 00:17:36.098 "trsvcid": "4420" 00:17:36.098 }, 00:17:36.098 "peer_address": { 00:17:36.098 "trtype": "TCP", 00:17:36.098 "adrfam": "IPv4", 00:17:36.098 "traddr": "10.0.0.1", 00:17:36.098 "trsvcid": "60896" 00:17:36.098 }, 00:17:36.098 "auth": { 00:17:36.098 "state": "completed", 00:17:36.098 "digest": "sha384", 00:17:36.098 "dhgroup": "ffdhe8192" 00:17:36.098 } 00:17:36.098 } 00:17:36.098 ]' 00:17:36.098 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.098 10:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.098 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.098 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.098 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.098 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.098 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.098 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.360 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:36.360 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:36.933 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.933 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.193 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.193 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.193 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.193 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:37.193 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.193 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.193 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:37.193 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:37.193 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:37.193 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.193 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:37.193 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:37.193 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:37.193 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.193 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.194 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.194 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.194 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.194 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.194 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.194 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.454 00:17:37.454 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.454 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.454 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.714 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.714 10:59:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.714 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.714 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.714 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.714 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.714 { 00:17:37.714 "cntlid": 97, 00:17:37.714 "qid": 0, 00:17:37.714 "state": "enabled", 00:17:37.714 "thread": "nvmf_tgt_poll_group_000", 00:17:37.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.714 "listen_address": { 00:17:37.714 "trtype": "TCP", 00:17:37.714 "adrfam": "IPv4", 00:17:37.714 "traddr": "10.0.0.2", 00:17:37.714 "trsvcid": "4420" 00:17:37.714 }, 00:17:37.714 "peer_address": { 00:17:37.714 "trtype": "TCP", 00:17:37.714 "adrfam": "IPv4", 00:17:37.714 "traddr": "10.0.0.1", 00:17:37.714 "trsvcid": "45360" 00:17:37.714 }, 00:17:37.714 "auth": { 00:17:37.714 "state": "completed", 00:17:37.714 "digest": "sha512", 00:17:37.714 "dhgroup": "null" 00:17:37.714 } 00:17:37.714 } 00:17:37.714 ]' 00:17:37.714 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.714 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.714 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.714 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:37.714 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.714 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.714 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.714 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.980 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:37.980 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.984 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.245 00:17:39.245 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.245 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.245 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.245 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.506 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.506 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.506 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.506 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.506 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.506 { 00:17:39.506 "cntlid": 99, 
00:17:39.506 "qid": 0, 00:17:39.506 "state": "enabled", 00:17:39.506 "thread": "nvmf_tgt_poll_group_000", 00:17:39.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.506 "listen_address": { 00:17:39.506 "trtype": "TCP", 00:17:39.506 "adrfam": "IPv4", 00:17:39.506 "traddr": "10.0.0.2", 00:17:39.506 "trsvcid": "4420" 00:17:39.506 }, 00:17:39.506 "peer_address": { 00:17:39.506 "trtype": "TCP", 00:17:39.506 "adrfam": "IPv4", 00:17:39.506 "traddr": "10.0.0.1", 00:17:39.506 "trsvcid": "45384" 00:17:39.506 }, 00:17:39.506 "auth": { 00:17:39.506 "state": "completed", 00:17:39.506 "digest": "sha512", 00:17:39.506 "dhgroup": "null" 00:17:39.506 } 00:17:39.506 } 00:17:39.506 ]' 00:17:39.506 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.506 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.506 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.506 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:39.506 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.506 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.506 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.506 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.766 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret 
DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:17:39.766 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:17:40.337 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.337 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.337 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.337 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.598 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.598 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.598 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.598 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.598 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:17:40.598 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.598 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.598 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:40.598 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:40.598 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.598 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.598 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.598 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.598 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.598 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.598 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.598 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.858 00:17:40.858 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.858 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.858 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.119 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.119 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.119 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.119 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.119 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.119 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.119 { 00:17:41.119 "cntlid": 101, 00:17:41.119 "qid": 0, 00:17:41.119 "state": "enabled", 00:17:41.119 "thread": "nvmf_tgt_poll_group_000", 00:17:41.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:41.119 "listen_address": { 00:17:41.119 "trtype": "TCP", 00:17:41.119 "adrfam": "IPv4", 00:17:41.119 "traddr": "10.0.0.2", 00:17:41.119 "trsvcid": "4420" 00:17:41.119 }, 00:17:41.119 "peer_address": { 00:17:41.119 "trtype": "TCP", 00:17:41.119 "adrfam": "IPv4", 00:17:41.119 "traddr": "10.0.0.1", 00:17:41.119 "trsvcid": "45406" 00:17:41.119 }, 00:17:41.119 "auth": { 00:17:41.119 "state": "completed", 00:17:41.119 "digest": "sha512", 00:17:41.119 "dhgroup": "null" 00:17:41.119 } 00:17:41.119 } 
00:17:41.119 ]' 00:17:41.119 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.119 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.119 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.119 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:41.119 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.119 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.119 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.119 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.379 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:17:41.380 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.321 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.321 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.581 00:17:42.581 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.581 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.581 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.842 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.842 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:42.842 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.842 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.842 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.842 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.842 { 00:17:42.842 "cntlid": 103, 00:17:42.842 "qid": 0, 00:17:42.842 "state": "enabled", 00:17:42.842 "thread": "nvmf_tgt_poll_group_000", 00:17:42.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:42.842 "listen_address": { 00:17:42.842 "trtype": "TCP", 00:17:42.842 "adrfam": "IPv4", 00:17:42.842 "traddr": "10.0.0.2", 00:17:42.842 "trsvcid": "4420" 00:17:42.842 }, 00:17:42.842 "peer_address": { 00:17:42.842 "trtype": "TCP", 00:17:42.842 "adrfam": "IPv4", 00:17:42.842 "traddr": "10.0.0.1", 00:17:42.842 "trsvcid": "45432" 00:17:42.842 }, 00:17:42.842 "auth": { 00:17:42.842 "state": "completed", 00:17:42.842 "digest": "sha512", 00:17:42.842 "dhgroup": "null" 00:17:42.842 } 00:17:42.842 } 00:17:42.842 ]' 00:17:42.842 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.842 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.842 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.842 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:42.842 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.842 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.842 10:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.842 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.102 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:43.102 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.044 10:59:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.044 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.304 00:17:44.304 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.304 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.304 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.564 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.564 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.564 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.564 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.564 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.564 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.564 { 00:17:44.564 "cntlid": 105, 00:17:44.564 "qid": 0, 00:17:44.564 "state": "enabled", 00:17:44.564 "thread": "nvmf_tgt_poll_group_000", 00:17:44.564 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.564 "listen_address": { 00:17:44.564 "trtype": "TCP", 00:17:44.564 "adrfam": "IPv4", 00:17:44.564 "traddr": "10.0.0.2", 00:17:44.564 "trsvcid": "4420" 00:17:44.564 }, 00:17:44.564 "peer_address": { 00:17:44.564 "trtype": "TCP", 00:17:44.565 "adrfam": "IPv4", 00:17:44.565 "traddr": "10.0.0.1", 00:17:44.565 "trsvcid": "45454" 00:17:44.565 }, 00:17:44.565 "auth": { 00:17:44.565 "state": "completed", 00:17:44.565 "digest": "sha512", 00:17:44.565 "dhgroup": "ffdhe2048" 00:17:44.565 } 00:17:44.565 } 00:17:44.565 ]' 00:17:44.565 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.565 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.565 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.565 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:44.565 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.565 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.565 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.565 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.825 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret 
DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:44.825 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:45.768 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.768 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.768 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.768 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.768 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.768 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.768 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.768 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.768 10:59:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:45.768 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.768 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.768 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:45.768 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:45.768 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.768 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.768 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.768 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.768 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.768 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.768 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.768 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.030 00:17:46.030 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.030 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.030 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.030 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.030 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.030 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.030 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.291 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.291 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.291 { 00:17:46.291 "cntlid": 107, 00:17:46.291 "qid": 0, 00:17:46.291 "state": "enabled", 00:17:46.291 "thread": "nvmf_tgt_poll_group_000", 00:17:46.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:46.291 "listen_address": { 00:17:46.291 "trtype": "TCP", 00:17:46.291 "adrfam": "IPv4", 00:17:46.291 "traddr": "10.0.0.2", 00:17:46.291 "trsvcid": "4420" 00:17:46.291 }, 00:17:46.291 "peer_address": { 00:17:46.291 "trtype": "TCP", 00:17:46.291 "adrfam": "IPv4", 00:17:46.291 "traddr": "10.0.0.1", 00:17:46.291 "trsvcid": "45490" 00:17:46.291 }, 00:17:46.291 "auth": { 00:17:46.291 "state": 
"completed", 00:17:46.291 "digest": "sha512", 00:17:46.291 "dhgroup": "ffdhe2048" 00:17:46.291 } 00:17:46.291 } 00:17:46.291 ]' 00:17:46.291 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.291 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.291 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.291 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.291 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.291 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.291 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.291 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.552 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:17:46.552 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:17:47.123 10:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.123 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.123 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.123 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.123 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.123 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.123 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.123 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.384 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:47.384 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.384 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.384 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:47.384 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:47.384 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.384 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.384 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.384 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.384 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.384 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.384 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.385 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.645 00:17:47.645 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.646 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.646 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.906 
10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.906 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.906 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.906 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.906 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.906 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.906 { 00:17:47.906 "cntlid": 109, 00:17:47.906 "qid": 0, 00:17:47.906 "state": "enabled", 00:17:47.906 "thread": "nvmf_tgt_poll_group_000", 00:17:47.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.906 "listen_address": { 00:17:47.906 "trtype": "TCP", 00:17:47.906 "adrfam": "IPv4", 00:17:47.906 "traddr": "10.0.0.2", 00:17:47.906 "trsvcid": "4420" 00:17:47.906 }, 00:17:47.906 "peer_address": { 00:17:47.906 "trtype": "TCP", 00:17:47.906 "adrfam": "IPv4", 00:17:47.906 "traddr": "10.0.0.1", 00:17:47.906 "trsvcid": "33016" 00:17:47.906 }, 00:17:47.906 "auth": { 00:17:47.906 "state": "completed", 00:17:47.906 "digest": "sha512", 00:17:47.906 "dhgroup": "ffdhe2048" 00:17:47.906 } 00:17:47.906 } 00:17:47.906 ]' 00:17:47.906 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.906 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.906 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.906 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:47.906 10:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.906 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.906 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.906 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.167 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:17:48.167 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:17:49.110 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.110 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.110 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.110 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.110 
10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.110 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.110 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.110 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.110 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:49.110 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.110 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.110 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:49.110 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:49.110 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.110 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:49.110 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.110 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.110 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.110 10:59:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.111 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.111 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.372 00:17:49.372 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.372 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.372 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.632 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.632 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.632 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.632 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.632 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.632 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.632 { 00:17:49.632 "cntlid": 111, 
00:17:49.632 "qid": 0, 00:17:49.632 "state": "enabled", 00:17:49.633 "thread": "nvmf_tgt_poll_group_000", 00:17:49.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:49.633 "listen_address": { 00:17:49.633 "trtype": "TCP", 00:17:49.633 "adrfam": "IPv4", 00:17:49.633 "traddr": "10.0.0.2", 00:17:49.633 "trsvcid": "4420" 00:17:49.633 }, 00:17:49.633 "peer_address": { 00:17:49.633 "trtype": "TCP", 00:17:49.633 "adrfam": "IPv4", 00:17:49.633 "traddr": "10.0.0.1", 00:17:49.633 "trsvcid": "33042" 00:17:49.633 }, 00:17:49.633 "auth": { 00:17:49.633 "state": "completed", 00:17:49.633 "digest": "sha512", 00:17:49.633 "dhgroup": "ffdhe2048" 00:17:49.633 } 00:17:49.633 } 00:17:49.633 ]' 00:17:49.633 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.633 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.633 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.633 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:49.633 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.633 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.633 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.633 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.894 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:49.894 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:50.466 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.466 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.466 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.466 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.466 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.466 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.466 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.466 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:50.466 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:50.726 10:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:50.726 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.726 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.726 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:50.726 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:50.726 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.726 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.726 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.726 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.726 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.726 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.726 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.726 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.986 00:17:50.986 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.986 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.986 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.246 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.246 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.246 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.246 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.246 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.246 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.246 { 00:17:51.246 "cntlid": 113, 00:17:51.246 "qid": 0, 00:17:51.247 "state": "enabled", 00:17:51.247 "thread": "nvmf_tgt_poll_group_000", 00:17:51.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:51.247 "listen_address": { 00:17:51.247 "trtype": "TCP", 00:17:51.247 "adrfam": "IPv4", 00:17:51.247 "traddr": "10.0.0.2", 00:17:51.247 "trsvcid": "4420" 00:17:51.247 }, 00:17:51.247 "peer_address": { 00:17:51.247 "trtype": "TCP", 00:17:51.247 "adrfam": "IPv4", 00:17:51.247 "traddr": "10.0.0.1", 00:17:51.247 "trsvcid": "33072" 00:17:51.247 }, 00:17:51.247 "auth": { 00:17:51.247 "state": 
"completed", 00:17:51.247 "digest": "sha512", 00:17:51.247 "dhgroup": "ffdhe3072" 00:17:51.247 } 00:17:51.247 } 00:17:51.247 ]' 00:17:51.247 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.247 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.247 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.247 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:51.247 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.247 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.247 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.247 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.507 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:51.507 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret 
DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.449 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.709 00:17:52.709 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.709 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.709 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.970 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.970 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.970 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.970 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.970 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.970 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.970 { 00:17:52.970 "cntlid": 115, 00:17:52.970 "qid": 0, 00:17:52.970 "state": "enabled", 00:17:52.970 "thread": "nvmf_tgt_poll_group_000", 00:17:52.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.970 "listen_address": { 00:17:52.970 "trtype": "TCP", 00:17:52.970 "adrfam": "IPv4", 00:17:52.970 "traddr": "10.0.0.2", 00:17:52.970 "trsvcid": "4420" 00:17:52.970 }, 00:17:52.970 "peer_address": { 00:17:52.970 "trtype": "TCP", 00:17:52.970 "adrfam": "IPv4", 00:17:52.970 "traddr": "10.0.0.1", 00:17:52.970 "trsvcid": "33104" 00:17:52.970 }, 00:17:52.970 "auth": { 00:17:52.970 "state": "completed", 00:17:52.970 "digest": "sha512", 00:17:52.970 "dhgroup": "ffdhe3072" 00:17:52.970 } 00:17:52.970 } 00:17:52.970 ]' 00:17:52.970 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.970 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.970 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.970 10:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.970 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.970 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.970 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.970 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.230 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:17:53.230 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.171 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.431 00:17:54.431 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.431 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.431 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.692 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.692 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.692 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.692 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.692 10:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.692 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.692 { 00:17:54.692 "cntlid": 117, 00:17:54.692 "qid": 0, 00:17:54.692 "state": "enabled", 00:17:54.692 "thread": "nvmf_tgt_poll_group_000", 00:17:54.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:54.692 "listen_address": { 00:17:54.692 "trtype": "TCP", 00:17:54.692 "adrfam": "IPv4", 00:17:54.692 "traddr": "10.0.0.2", 00:17:54.692 "trsvcid": "4420" 00:17:54.692 }, 00:17:54.692 "peer_address": { 00:17:54.692 "trtype": "TCP", 00:17:54.692 "adrfam": "IPv4", 00:17:54.692 "traddr": "10.0.0.1", 00:17:54.692 "trsvcid": "33134" 00:17:54.692 }, 00:17:54.692 "auth": { 00:17:54.692 "state": "completed", 00:17:54.692 "digest": "sha512", 00:17:54.692 "dhgroup": "ffdhe3072" 00:17:54.692 } 00:17:54.692 } 00:17:54.692 ]' 00:17:54.692 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.692 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.692 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.692 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.692 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.692 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.692 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.692 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.953 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:17:54.953 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:17:55.524 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.524 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.524 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.524 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.524 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.524 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.524 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:55.524 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:55.785 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:55.785 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.785 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.785 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:55.785 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.785 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.785 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:55.785 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.785 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.785 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.785 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.785 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.785 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.046 00:17:56.046 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.046 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.046 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.308 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.308 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.308 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.308 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.308 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.308 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.308 { 00:17:56.308 "cntlid": 119, 00:17:56.308 "qid": 0, 00:17:56.308 "state": "enabled", 00:17:56.308 "thread": "nvmf_tgt_poll_group_000", 00:17:56.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:56.308 "listen_address": { 00:17:56.308 "trtype": "TCP", 00:17:56.308 "adrfam": "IPv4", 00:17:56.308 "traddr": "10.0.0.2", 00:17:56.308 "trsvcid": "4420" 00:17:56.308 }, 00:17:56.308 "peer_address": { 00:17:56.308 "trtype": "TCP", 00:17:56.308 "adrfam": "IPv4", 00:17:56.308 "traddr": "10.0.0.1", 
00:17:56.308 "trsvcid": "33168" 00:17:56.308 }, 00:17:56.308 "auth": { 00:17:56.308 "state": "completed", 00:17:56.308 "digest": "sha512", 00:17:56.308 "dhgroup": "ffdhe3072" 00:17:56.308 } 00:17:56.308 } 00:17:56.308 ]' 00:17:56.308 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.308 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.308 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.308 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.308 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.308 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.308 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.308 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.570 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:56.570 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:17:57.513 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.513 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.513 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.513 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.513 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.513 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.513 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.513 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.514 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.514 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:57.514 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.514 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.514 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:57.514 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:57.514 10:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.514 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.514 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.514 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.514 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.514 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.514 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.514 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.776 00:17:57.776 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.776 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.776 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.037 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.037 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.037 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.037 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.037 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.037 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.037 { 00:17:58.037 "cntlid": 121, 00:17:58.037 "qid": 0, 00:17:58.037 "state": "enabled", 00:17:58.037 "thread": "nvmf_tgt_poll_group_000", 00:17:58.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.037 "listen_address": { 00:17:58.037 "trtype": "TCP", 00:17:58.037 "adrfam": "IPv4", 00:17:58.037 "traddr": "10.0.0.2", 00:17:58.037 "trsvcid": "4420" 00:17:58.037 }, 00:17:58.037 "peer_address": { 00:17:58.037 "trtype": "TCP", 00:17:58.037 "adrfam": "IPv4", 00:17:58.037 "traddr": "10.0.0.1", 00:17:58.037 "trsvcid": "36538" 00:17:58.037 }, 00:17:58.037 "auth": { 00:17:58.037 "state": "completed", 00:17:58.037 "digest": "sha512", 00:17:58.037 "dhgroup": "ffdhe4096" 00:17:58.037 } 00:17:58.037 } 00:17:58.037 ]' 00:17:58.037 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.037 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.037 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.037 10:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:58.037 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.037 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.037 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.037 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.298 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:58.298 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:17:59.240 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.241 10:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.241 10:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.241 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.502 00:17:59.502 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.502 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.502 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.763 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.763 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.763 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.763 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:59.763 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.763 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.763 { 00:17:59.763 "cntlid": 123, 00:17:59.763 "qid": 0, 00:17:59.763 "state": "enabled", 00:17:59.763 "thread": "nvmf_tgt_poll_group_000", 00:17:59.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.763 "listen_address": { 00:17:59.763 "trtype": "TCP", 00:17:59.763 "adrfam": "IPv4", 00:17:59.763 "traddr": "10.0.0.2", 00:17:59.763 "trsvcid": "4420" 00:17:59.763 }, 00:17:59.763 "peer_address": { 00:17:59.763 "trtype": "TCP", 00:17:59.763 "adrfam": "IPv4", 00:17:59.763 "traddr": "10.0.0.1", 00:17:59.763 "trsvcid": "36562" 00:17:59.763 }, 00:17:59.763 "auth": { 00:17:59.763 "state": "completed", 00:17:59.763 "digest": "sha512", 00:17:59.763 "dhgroup": "ffdhe4096" 00:17:59.763 } 00:17:59.763 } 00:17:59.763 ]' 00:17:59.763 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.763 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.763 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.763 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.763 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.023 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.023 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.023 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.023 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:18:00.023 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:00.965 10:59:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.965 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.225 00:18:01.225 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.225 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.225 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.486 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.486 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.486 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.486 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.486 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.486 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.486 { 00:18:01.486 "cntlid": 125, 00:18:01.486 "qid": 0, 00:18:01.486 "state": "enabled", 00:18:01.486 "thread": "nvmf_tgt_poll_group_000", 00:18:01.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:01.486 "listen_address": { 00:18:01.486 "trtype": "TCP", 00:18:01.486 "adrfam": "IPv4", 00:18:01.486 "traddr": "10.0.0.2", 00:18:01.486 
"trsvcid": "4420" 00:18:01.486 }, 00:18:01.486 "peer_address": { 00:18:01.486 "trtype": "TCP", 00:18:01.486 "adrfam": "IPv4", 00:18:01.486 "traddr": "10.0.0.1", 00:18:01.486 "trsvcid": "36588" 00:18:01.486 }, 00:18:01.486 "auth": { 00:18:01.486 "state": "completed", 00:18:01.486 "digest": "sha512", 00:18:01.486 "dhgroup": "ffdhe4096" 00:18:01.486 } 00:18:01.486 } 00:18:01.486 ]' 00:18:01.486 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.486 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.486 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.486 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:01.746 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.746 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.746 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.746 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.746 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:18:01.746 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:18:02.687 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.687 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.687 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.687 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.687 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.687 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.687 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.687 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.687 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:02.687 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.687 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.687 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:02.687 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:02.687 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.687 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:02.687 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.687 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.687 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.687 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:02.687 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.687 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.947 00:18:02.947 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.947 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.947 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.209 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.209 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.209 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.209 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.209 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.209 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.209 { 00:18:03.209 "cntlid": 127, 00:18:03.209 "qid": 0, 00:18:03.209 "state": "enabled", 00:18:03.209 "thread": "nvmf_tgt_poll_group_000", 00:18:03.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:03.209 "listen_address": { 00:18:03.209 "trtype": "TCP", 00:18:03.209 "adrfam": "IPv4", 00:18:03.209 "traddr": "10.0.0.2", 00:18:03.209 "trsvcid": "4420" 00:18:03.209 }, 00:18:03.209 "peer_address": { 00:18:03.209 "trtype": "TCP", 00:18:03.209 "adrfam": "IPv4", 00:18:03.209 "traddr": "10.0.0.1", 00:18:03.209 "trsvcid": "36630" 00:18:03.209 }, 00:18:03.209 "auth": { 00:18:03.209 "state": "completed", 00:18:03.209 "digest": "sha512", 00:18:03.209 "dhgroup": "ffdhe4096" 00:18:03.209 } 00:18:03.209 } 00:18:03.209 ]' 00:18:03.209 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.209 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.209 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.209 10:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.209 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.471 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.471 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.471 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.471 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:18:03.471 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.412 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.983 00:18:04.983 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.983 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.983 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.983 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.983 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.983 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.983 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.983 10:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.983 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.983 { 00:18:04.983 "cntlid": 129, 00:18:04.983 "qid": 0, 00:18:04.983 "state": "enabled", 00:18:04.983 "thread": "nvmf_tgt_poll_group_000", 00:18:04.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.983 "listen_address": { 00:18:04.983 "trtype": "TCP", 00:18:04.983 "adrfam": "IPv4", 00:18:04.983 "traddr": "10.0.0.2", 00:18:04.983 "trsvcid": "4420" 00:18:04.983 }, 00:18:04.983 "peer_address": { 00:18:04.983 "trtype": "TCP", 00:18:04.983 "adrfam": "IPv4", 00:18:04.983 "traddr": "10.0.0.1", 00:18:04.983 "trsvcid": "36650" 00:18:04.983 }, 00:18:04.983 "auth": { 00:18:04.983 "state": "completed", 00:18:04.983 "digest": "sha512", 00:18:04.983 "dhgroup": "ffdhe6144" 00:18:04.983 } 00:18:04.983 } 00:18:04.983 ]' 00:18:04.983 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.243 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.243 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.243 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:05.243 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.243 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.243 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.243 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.504 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:18:05.504 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:18:06.076 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.076 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.076 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.076 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.076 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.076 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.076 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:06.076 10:59:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:06.337 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:06.337 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.337 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.337 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:06.337 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:06.337 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.337 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.337 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.337 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.337 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.337 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.337 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.337 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.598 00:18:06.859 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.859 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.859 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.859 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.859 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.859 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.859 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.859 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.859 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.859 { 00:18:06.859 "cntlid": 131, 00:18:06.859 "qid": 0, 00:18:06.859 "state": "enabled", 00:18:06.859 "thread": "nvmf_tgt_poll_group_000", 00:18:06.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.859 "listen_address": { 00:18:06.859 "trtype": "TCP", 00:18:06.859 "adrfam": "IPv4", 00:18:06.859 "traddr": "10.0.0.2", 00:18:06.859 
"trsvcid": "4420" 00:18:06.859 }, 00:18:06.859 "peer_address": { 00:18:06.859 "trtype": "TCP", 00:18:06.859 "adrfam": "IPv4", 00:18:06.859 "traddr": "10.0.0.1", 00:18:06.859 "trsvcid": "36678" 00:18:06.859 }, 00:18:06.859 "auth": { 00:18:06.859 "state": "completed", 00:18:06.859 "digest": "sha512", 00:18:06.859 "dhgroup": "ffdhe6144" 00:18:06.859 } 00:18:06.859 } 00:18:06.859 ]' 00:18:06.859 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.859 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.859 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.216 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.216 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.216 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.216 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.216 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.216 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:18:07.216 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:18:08.206 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.206 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.206 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.206 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.206 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.206 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.206 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.206 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.206 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:08.206 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.206 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.206 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:08.206 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:08.206 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.207 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.207 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.207 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.207 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.207 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.207 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.207 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.467 00:18:08.467 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.467 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:08.467 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.728 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.728 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.728 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.728 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.728 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.728 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.728 { 00:18:08.728 "cntlid": 133, 00:18:08.728 "qid": 0, 00:18:08.728 "state": "enabled", 00:18:08.728 "thread": "nvmf_tgt_poll_group_000", 00:18:08.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:08.728 "listen_address": { 00:18:08.728 "trtype": "TCP", 00:18:08.728 "adrfam": "IPv4", 00:18:08.728 "traddr": "10.0.0.2", 00:18:08.728 "trsvcid": "4420" 00:18:08.728 }, 00:18:08.728 "peer_address": { 00:18:08.728 "trtype": "TCP", 00:18:08.728 "adrfam": "IPv4", 00:18:08.728 "traddr": "10.0.0.1", 00:18:08.728 "trsvcid": "45974" 00:18:08.728 }, 00:18:08.728 "auth": { 00:18:08.728 "state": "completed", 00:18:08.728 "digest": "sha512", 00:18:08.728 "dhgroup": "ffdhe6144" 00:18:08.728 } 00:18:08.728 } 00:18:08.728 ]' 00:18:08.728 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.728 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.728 11:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.728 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.728 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.988 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.988 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.988 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.988 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:18:08.989 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.932 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.505 00:18:10.505 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.505 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.505 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.505 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.505 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.505 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.505 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:10.505 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.505 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.505 { 00:18:10.505 "cntlid": 135, 00:18:10.505 "qid": 0, 00:18:10.505 "state": "enabled", 00:18:10.505 "thread": "nvmf_tgt_poll_group_000", 00:18:10.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:10.505 "listen_address": { 00:18:10.505 "trtype": "TCP", 00:18:10.505 "adrfam": "IPv4", 00:18:10.505 "traddr": "10.0.0.2", 00:18:10.505 "trsvcid": "4420" 00:18:10.505 }, 00:18:10.505 "peer_address": { 00:18:10.505 "trtype": "TCP", 00:18:10.505 "adrfam": "IPv4", 00:18:10.505 "traddr": "10.0.0.1", 00:18:10.505 "trsvcid": "46000" 00:18:10.505 }, 00:18:10.505 "auth": { 00:18:10.505 "state": "completed", 00:18:10.505 "digest": "sha512", 00:18:10.505 "dhgroup": "ffdhe6144" 00:18:10.505 } 00:18:10.505 } 00:18:10.505 ]' 00:18:10.505 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.505 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.505 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.766 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.767 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.767 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.767 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.767 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.767 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:18:11.026 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:18:11.598 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.598 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.598 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.598 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.598 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.598 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.598 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.598 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:11.598 11:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:11.859 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:11.859 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.859 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.859 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:11.859 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:11.859 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.859 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.859 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.859 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.859 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.859 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.859 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.859 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.430 00:18:12.430 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.430 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.430 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.692 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.692 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.692 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.692 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.692 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.692 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.692 { 00:18:12.692 "cntlid": 137, 00:18:12.692 "qid": 0, 00:18:12.692 "state": "enabled", 00:18:12.692 "thread": "nvmf_tgt_poll_group_000", 00:18:12.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.692 "listen_address": { 00:18:12.692 "trtype": "TCP", 00:18:12.692 "adrfam": "IPv4", 00:18:12.692 "traddr": "10.0.0.2", 00:18:12.692 
"trsvcid": "4420" 00:18:12.692 }, 00:18:12.692 "peer_address": { 00:18:12.692 "trtype": "TCP", 00:18:12.692 "adrfam": "IPv4", 00:18:12.692 "traddr": "10.0.0.1", 00:18:12.692 "trsvcid": "46020" 00:18:12.692 }, 00:18:12.692 "auth": { 00:18:12.692 "state": "completed", 00:18:12.692 "digest": "sha512", 00:18:12.692 "dhgroup": "ffdhe8192" 00:18:12.692 } 00:18:12.692 } 00:18:12.692 ]' 00:18:12.692 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.692 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.692 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.692 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.692 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.692 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.692 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.692 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.953 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:18:12.953 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:18:13.526 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.787 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.787 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.787 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.787 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.787 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.787 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:13.787 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:13.787 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:13.787 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.787 11:00:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.787 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:13.787 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:13.787 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.787 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.787 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.787 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.787 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.787 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.787 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.787 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.358 00:18:14.358 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.358 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.358 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.618 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.618 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.618 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.618 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.618 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.618 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.618 { 00:18:14.618 "cntlid": 139, 00:18:14.618 "qid": 0, 00:18:14.618 "state": "enabled", 00:18:14.618 "thread": "nvmf_tgt_poll_group_000", 00:18:14.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.618 "listen_address": { 00:18:14.618 "trtype": "TCP", 00:18:14.618 "adrfam": "IPv4", 00:18:14.618 "traddr": "10.0.0.2", 00:18:14.618 "trsvcid": "4420" 00:18:14.618 }, 00:18:14.618 "peer_address": { 00:18:14.618 "trtype": "TCP", 00:18:14.618 "adrfam": "IPv4", 00:18:14.618 "traddr": "10.0.0.1", 00:18:14.618 "trsvcid": "46050" 00:18:14.618 }, 00:18:14.618 "auth": { 00:18:14.618 "state": "completed", 00:18:14.618 "digest": "sha512", 00:18:14.618 "dhgroup": "ffdhe8192" 00:18:14.618 } 00:18:14.618 } 00:18:14.618 ]' 00:18:14.618 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.618 11:00:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.618 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.618 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.618 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.618 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.618 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.618 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.880 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:18:14.880 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: --dhchap-ctrl-secret DHHC-1:02:YjRlNmU2YWYyYWM0NjZlMTI5OWE0ODk0ODc3N2EyNDY0NTUwZWJlMjdiZWZmY2Vhwo+hPw==: 00:18:15.826 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.826 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.826 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.826 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.826 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.826 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.826 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.826 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.826 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:15.826 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.826 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.826 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:15.826 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:15.826 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.826 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:15.826 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.826 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.826 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.826 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.826 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.826 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.398 00:18:16.398 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.398 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.398 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.659 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.659 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.659 11:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.659 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.659 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.659 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.659 { 00:18:16.659 "cntlid": 141, 00:18:16.659 "qid": 0, 00:18:16.659 "state": "enabled", 00:18:16.659 "thread": "nvmf_tgt_poll_group_000", 00:18:16.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.659 "listen_address": { 00:18:16.659 "trtype": "TCP", 00:18:16.659 "adrfam": "IPv4", 00:18:16.659 "traddr": "10.0.0.2", 00:18:16.659 "trsvcid": "4420" 00:18:16.659 }, 00:18:16.659 "peer_address": { 00:18:16.659 "trtype": "TCP", 00:18:16.659 "adrfam": "IPv4", 00:18:16.659 "traddr": "10.0.0.1", 00:18:16.659 "trsvcid": "46078" 00:18:16.659 }, 00:18:16.659 "auth": { 00:18:16.659 "state": "completed", 00:18:16.659 "digest": "sha512", 00:18:16.659 "dhgroup": "ffdhe8192" 00:18:16.659 } 00:18:16.659 } 00:18:16.659 ]' 00:18:16.659 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.659 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.659 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.659 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.659 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.659 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.659 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.659 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.920 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:18:16.920 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:01:MmU2OTBkYWM2NmQ3MmIxYmJiM2Y5MjRmN2JjYTM3OGGcvALT: 00:18:17.491 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.491 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.491 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.491 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.752 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.752 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.752 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.752 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.752 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:17.752 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.752 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.752 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:17.752 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:17.752 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.752 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:17.752 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.752 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.752 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.752 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:17.752 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.752 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.324 00:18:18.324 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.324 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.324 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.586 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.586 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.586 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.586 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.586 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.586 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.586 { 00:18:18.586 "cntlid": 143, 00:18:18.586 "qid": 0, 00:18:18.586 "state": "enabled", 00:18:18.586 "thread": "nvmf_tgt_poll_group_000", 00:18:18.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:18.586 "listen_address": { 00:18:18.586 "trtype": "TCP", 00:18:18.586 "adrfam": 
"IPv4", 00:18:18.586 "traddr": "10.0.0.2", 00:18:18.586 "trsvcid": "4420" 00:18:18.586 }, 00:18:18.586 "peer_address": { 00:18:18.586 "trtype": "TCP", 00:18:18.586 "adrfam": "IPv4", 00:18:18.586 "traddr": "10.0.0.1", 00:18:18.586 "trsvcid": "42024" 00:18:18.586 }, 00:18:18.586 "auth": { 00:18:18.586 "state": "completed", 00:18:18.586 "digest": "sha512", 00:18:18.586 "dhgroup": "ffdhe8192" 00:18:18.586 } 00:18:18.586 } 00:18:18.586 ]' 00:18:18.586 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.586 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.586 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.586 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.586 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.586 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.586 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.586 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.848 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:18:18.848 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:18:19.419 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.419 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.419 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.419 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.419 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.419 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:19.419 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:19.419 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:19.419 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:19.419 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:19.419 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:19.681 11:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:19.681 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.681 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.681 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:19.681 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:19.681 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.681 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.681 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.681 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.681 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.681 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.681 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.682 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.255 00:18:20.255 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.255 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.255 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.515 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.515 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.515 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.515 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.515 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.515 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.515 { 00:18:20.515 "cntlid": 145, 00:18:20.515 "qid": 0, 00:18:20.515 "state": "enabled", 00:18:20.515 "thread": "nvmf_tgt_poll_group_000", 00:18:20.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:20.515 "listen_address": { 00:18:20.515 "trtype": "TCP", 00:18:20.515 "adrfam": "IPv4", 00:18:20.515 "traddr": "10.0.0.2", 00:18:20.515 "trsvcid": "4420" 00:18:20.515 }, 00:18:20.515 "peer_address": { 00:18:20.515 "trtype": "TCP", 00:18:20.515 "adrfam": "IPv4", 00:18:20.515 "traddr": "10.0.0.1", 00:18:20.515 "trsvcid": "42050" 00:18:20.515 }, 00:18:20.515 "auth": { 00:18:20.515 "state": 
"completed", 00:18:20.515 "digest": "sha512", 00:18:20.515 "dhgroup": "ffdhe8192" 00:18:20.515 } 00:18:20.515 } 00:18:20.515 ]' 00:18:20.515 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.515 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.515 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.515 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.515 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.515 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.515 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.515 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.776 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:18:20.776 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmIxMDFjMGRmYmFlODIwOGYwYTRkNmE1MTcyZWIyZDUyNDE4MzNlNTU1ODE1ZmNildVHOQ==: --dhchap-ctrl-secret 
DHHC-1:03:YTQ2OTAxNzVjMjMzYzFjYTRjMzE1MjNiZDZiZDJlNzNhOGZjY2MyNzA0NzI1MTUxOThhMGYyOTAzNDhmM2FlZb/zTII=: 00:18:21.348 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.348 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.348 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.348 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.348 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.348 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:21.348 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.348 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.609 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.609 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:21.609 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:21.609 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:21.609 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:18:21.609 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.609 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:21.609 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.609 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:21.609 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:21.609 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:21.871 request: 00:18:21.871 { 00:18:21.871 "name": "nvme0", 00:18:21.871 "trtype": "tcp", 00:18:21.871 "traddr": "10.0.0.2", 00:18:21.871 "adrfam": "ipv4", 00:18:21.871 "trsvcid": "4420", 00:18:21.871 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:21.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:21.871 "prchk_reftag": false, 00:18:21.871 "prchk_guard": false, 00:18:21.871 "hdgst": false, 00:18:21.871 "ddgst": false, 00:18:21.871 "dhchap_key": "key2", 00:18:21.871 "allow_unrecognized_csi": false, 00:18:21.871 "method": "bdev_nvme_attach_controller", 00:18:21.871 "req_id": 1 00:18:21.871 } 00:18:21.871 Got JSON-RPC error response 00:18:21.871 response: 00:18:21.871 { 00:18:21.871 "code": -5, 00:18:21.871 "message": 
"Input/output error" 00:18:21.871 } 00:18:21.871 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:21.871 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.871 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.871 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.871 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.871 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.871 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.871 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.871 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.871 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.871 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.871 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.871 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:21.871 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:21.871 11:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:21.871 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:21.871 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.871 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:22.132 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.132 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.132 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.133 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.394 request: 00:18:22.394 { 00:18:22.394 "name": "nvme0", 00:18:22.394 "trtype": "tcp", 00:18:22.394 "traddr": "10.0.0.2", 00:18:22.394 "adrfam": "ipv4", 00:18:22.394 "trsvcid": "4420", 00:18:22.394 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:22.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:22.394 "prchk_reftag": false, 00:18:22.394 "prchk_guard": false, 00:18:22.394 "hdgst": 
false, 00:18:22.394 "ddgst": false, 00:18:22.394 "dhchap_key": "key1", 00:18:22.394 "dhchap_ctrlr_key": "ckey2", 00:18:22.394 "allow_unrecognized_csi": false, 00:18:22.394 "method": "bdev_nvme_attach_controller", 00:18:22.394 "req_id": 1 00:18:22.394 } 00:18:22.394 Got JSON-RPC error response 00:18:22.394 response: 00:18:22.394 { 00:18:22.394 "code": -5, 00:18:22.394 "message": "Input/output error" 00:18:22.394 } 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.394 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.966 request: 00:18:22.966 { 00:18:22.966 "name": "nvme0", 00:18:22.966 "trtype": 
"tcp", 00:18:22.966 "traddr": "10.0.0.2", 00:18:22.966 "adrfam": "ipv4", 00:18:22.966 "trsvcid": "4420", 00:18:22.966 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:22.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:22.966 "prchk_reftag": false, 00:18:22.966 "prchk_guard": false, 00:18:22.966 "hdgst": false, 00:18:22.966 "ddgst": false, 00:18:22.966 "dhchap_key": "key1", 00:18:22.966 "dhchap_ctrlr_key": "ckey1", 00:18:22.966 "allow_unrecognized_csi": false, 00:18:22.966 "method": "bdev_nvme_attach_controller", 00:18:22.966 "req_id": 1 00:18:22.966 } 00:18:22.966 Got JSON-RPC error response 00:18:22.966 response: 00:18:22.966 { 00:18:22.966 "code": -5, 00:18:22.966 "message": "Input/output error" 00:18:22.966 } 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3226435 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@952 -- # '[' -z 3226435 ']' 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3226435 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3226435 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3226435' 00:18:22.966 killing process with pid 3226435 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3226435 00:18:22.966 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3226435 00:18:23.227 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:23.227 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:23.227 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:23.227 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.227 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3253852 00:18:23.227 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3253852 00:18:23.227 11:00:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:23.227 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3253852 ']' 00:18:23.227 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.227 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:23.227 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.227 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:23.227 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 3253852 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3253852 ']' 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.168 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.430 null0 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.liT 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.IQn ]] 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IQn 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.rDC 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.pDV ]] 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pDV 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.amB 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Fga ]] 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fga 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tXH 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.430 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.372 nvme0n1 00:18:25.372 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.372 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.372 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.372 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.372 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.372 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.372 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.372 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.372 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.372 { 00:18:25.372 "cntlid": 1, 00:18:25.372 "qid": 0, 00:18:25.372 "state": "enabled", 00:18:25.372 "thread": "nvmf_tgt_poll_group_000", 00:18:25.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:25.372 "listen_address": { 00:18:25.372 "trtype": "TCP", 00:18:25.372 "adrfam": "IPv4", 00:18:25.372 "traddr": "10.0.0.2", 00:18:25.372 "trsvcid": "4420" 00:18:25.372 }, 00:18:25.372 "peer_address": { 00:18:25.372 "trtype": "TCP", 00:18:25.372 "adrfam": "IPv4", 00:18:25.372 "traddr": 
"10.0.0.1", 00:18:25.372 "trsvcid": "42110" 00:18:25.372 }, 00:18:25.372 "auth": { 00:18:25.372 "state": "completed", 00:18:25.372 "digest": "sha512", 00:18:25.372 "dhgroup": "ffdhe8192" 00:18:25.372 } 00:18:25.372 } 00:18:25.372 ]' 00:18:25.372 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.633 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.633 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.633 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.633 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.633 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.633 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.633 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.893 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:18:25.893 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=: 00:18:26.462 11:00:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.462 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.462 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.462 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.723 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.723 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:26.723 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.723 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.723 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.723 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:26.723 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:26.723 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:26.723 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:26.723 11:00:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:26.723 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:26.723 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.723 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:26.723 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.723 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:26.723 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.723 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.984 request: 00:18:26.984 { 00:18:26.984 "name": "nvme0", 00:18:26.984 "trtype": "tcp", 00:18:26.984 "traddr": "10.0.0.2", 00:18:26.984 "adrfam": "ipv4", 00:18:26.984 "trsvcid": "4420", 00:18:26.984 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:26.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.984 "prchk_reftag": false, 00:18:26.984 "prchk_guard": false, 00:18:26.984 "hdgst": false, 00:18:26.984 "ddgst": false, 00:18:26.984 "dhchap_key": "key3", 00:18:26.984 
"allow_unrecognized_csi": false, 00:18:26.984 "method": "bdev_nvme_attach_controller", 00:18:26.984 "req_id": 1 00:18:26.984 } 00:18:26.984 Got JSON-RPC error response 00:18:26.984 response: 00:18:26.984 { 00:18:26.984 "code": -5, 00:18:26.984 "message": "Input/output error" 00:18:26.984 } 00:18:26.984 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:26.984 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:26.984 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:26.984 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:26.984 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:26.984 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:26.984 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:26.984 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:26.984 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:26.984 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:26.984 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:26.984 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:27.245 11:00:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.245 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:27.245 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.245 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.245 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.246 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.246 request: 00:18:27.246 { 00:18:27.246 "name": "nvme0", 00:18:27.246 "trtype": "tcp", 00:18:27.246 "traddr": "10.0.0.2", 00:18:27.246 "adrfam": "ipv4", 00:18:27.246 "trsvcid": "4420", 00:18:27.246 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:27.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:27.246 "prchk_reftag": false, 00:18:27.246 "prchk_guard": false, 00:18:27.246 "hdgst": false, 00:18:27.246 "ddgst": false, 00:18:27.246 "dhchap_key": "key3", 00:18:27.246 "allow_unrecognized_csi": false, 00:18:27.246 "method": "bdev_nvme_attach_controller", 00:18:27.246 "req_id": 1 00:18:27.246 } 00:18:27.246 Got JSON-RPC error response 00:18:27.246 response: 00:18:27.246 { 00:18:27.246 "code": -5, 00:18:27.246 "message": "Input/output error" 00:18:27.246 } 00:18:27.246 
11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:27.246 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:27.246 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:27.246 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:27.246 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:27.246 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:27.246 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:27.246 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:27.246 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:27.246 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:27.506 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:27.767 request: 00:18:27.767 { 00:18:27.767 "name": "nvme0", 00:18:27.767 "trtype": "tcp", 00:18:27.767 "traddr": "10.0.0.2", 00:18:27.767 "adrfam": "ipv4", 00:18:27.767 "trsvcid": "4420", 00:18:27.767 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:27.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:27.767 "prchk_reftag": false, 00:18:27.767 "prchk_guard": false, 00:18:27.767 "hdgst": false, 00:18:27.767 "ddgst": false, 00:18:27.767 "dhchap_key": "key0", 00:18:27.767 "dhchap_ctrlr_key": "key1", 00:18:27.767 "allow_unrecognized_csi": false, 00:18:27.767 "method": "bdev_nvme_attach_controller", 00:18:27.767 "req_id": 1 00:18:27.767 } 00:18:27.767 Got JSON-RPC error response 00:18:27.767 response: 00:18:27.767 { 00:18:27.767 "code": -5, 00:18:27.767 "message": "Input/output error" 00:18:27.767 } 00:18:27.767 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:27.767 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:27.767 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:27.767 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:27.767 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:27.767 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:27.767 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:28.027 nvme0n1 00:18:28.027 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:28.027 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:28.027 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.287 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.287 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.287 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.547 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:28.547 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.547 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:28.547 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:28.547 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:18:28.547 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:28.547 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:29.487 nvme0n1
00:18:29.487 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:18:29.487 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:29.487 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:18:29.487 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:29.487 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:29.487 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:29.487 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:29.487 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:29.487 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:18:29.487 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:18:29.487 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:29.747 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:29.747 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=:
00:18:29.747 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: --dhchap-ctrl-secret DHHC-1:03:MTExNDJmODFhOGZmOGE1NzFlNTI4OWYyN2I3ZTI3MzExMGUyZGNiNWViYjBhZmMyZDIwOGQyMjU1MzIyMTU1YgCyZpY=:
00:18:30.319 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:18:30.319 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:18:30.319 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:18:30.319 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:18:30.319 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:18:30.319 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:18:30.319 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:18:30.319 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:30.319 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:30.579 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:18:30.579 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:30.579 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:18:30.579 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:18:30.579 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:30.579 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:18:30.579 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:30.579 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1
00:18:30.579 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:30.579 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:31.150 request:
00:18:31.150 {
00:18:31.150 "name": "nvme0",
00:18:31.150 "trtype": "tcp",
00:18:31.150 "traddr": "10.0.0.2",
00:18:31.150 "adrfam": "ipv4",
00:18:31.150 "trsvcid": "4420",
00:18:31.150 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:31.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:31.150 "prchk_reftag": false,
00:18:31.150 "prchk_guard": false,
00:18:31.150 "hdgst": false,
00:18:31.150 "ddgst": false,
00:18:31.150 "dhchap_key": "key1",
00:18:31.150 "allow_unrecognized_csi": false,
00:18:31.150 "method": "bdev_nvme_attach_controller",
00:18:31.150 "req_id": 1
00:18:31.150 }
00:18:31.150 Got JSON-RPC error response
00:18:31.150 response:
00:18:31.150 {
00:18:31.150 "code": -5,
00:18:31.150 "message": "Input/output error"
00:18:31.150 }
00:18:31.150 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:31.150 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:31.150 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:31.150 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:31.150 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:31.150 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:31.150 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:32.093 nvme0n1
00:18:32.093 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:18:32.093 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:18:32.093 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:32.093 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:32.093 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:32.093 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:32.354 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:32.354 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:32.354 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.354 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:32.354 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:18:32.354 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:18:32.354 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:18:32.615 nvme0n1
00:18:32.615 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:18:32.615 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:18:32.615 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:32.615 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:32.615 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:32.615 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:32.875 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:32.875 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:32.875 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:32.875 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:32.875 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: '' 2s
00:18:32.875 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:18:32.875 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:18:32.875 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7:
00:18:32.875 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:18:32.875 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:18:32.875 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:18:32.875 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7: ]]
00:18:32.875 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MTI1NGJhMDY4Y2M0NWFjYmM4MmUzOWEyZjc2NDE0MTG4htb7:
00:18:32.875 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:18:32.875 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:18:32.875 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:18:34.790 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:18:34.790 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0
00:18:35.051 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: 2s
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==:
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==: ]]
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MGE1MmZlZTg5ZWQ0M2QzNmE3Y2MzMjFhZTU3ZTk0YWNjMWUzY2NhMjE2NDg2NTk14XFr4g==:
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:18:35.052 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:18:36.968 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:18:36.968 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0
00:18:36.968 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:18:36.968 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1
00:18:36.968 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:18:36.968 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1
00:18:36.968 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0
00:18:36.968 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:36.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:36.968 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:36.968 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:36.968 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.968 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:36.968 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:36.968 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:36.968 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:37.911 nvme0n1
00:18:37.911 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:37.911 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:37.911 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:37.911 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:37.911 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:37.911 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:38.483 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:18:38.483 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:18:38.483 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:38.483 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:38.483 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:38.483 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:38.483 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.483 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:38.483 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:38.483 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:38.744 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:38.744 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:38.744 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.006 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.006 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:39.006 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.006 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.006 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.006 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:39.006 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:39.006 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:39.006 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:39.006 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.006 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:39.006 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.006 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:39.006 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:39.578 request: 00:18:39.578 { 00:18:39.578 "name": "nvme0", 00:18:39.578 "dhchap_key": "key1", 00:18:39.578 "dhchap_ctrlr_key": "key3", 00:18:39.578 "method": "bdev_nvme_set_keys", 00:18:39.578 "req_id": 1 00:18:39.578 } 00:18:39.578 Got JSON-RPC error response 00:18:39.578 response: 00:18:39.578 { 00:18:39.578 "code": -13, 00:18:39.578 "message": "Permission denied" 00:18:39.578 } 00:18:39.578 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:39.578 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.578 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.578 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:39.578 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:39.578 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:39.578 11:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.578 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:39.578 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:40.963 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:40.963 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:40.963 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.963 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:40.963 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:40.963 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.963 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.964 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.964 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:40.964 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:40.964 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:41.535 nvme0n1 00:18:41.795 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:41.795 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.795 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.795 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.795 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:41.795 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:41.795 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:41.796 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:41.796 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:41.796 11:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:41.796 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:41.796 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:41.796 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:42.366 request: 00:18:42.366 { 00:18:42.366 "name": "nvme0", 00:18:42.366 "dhchap_key": "key2", 00:18:42.366 "dhchap_ctrlr_key": "key0", 00:18:42.366 "method": "bdev_nvme_set_keys", 00:18:42.366 "req_id": 1 00:18:42.366 } 00:18:42.366 Got JSON-RPC error response 00:18:42.366 response: 00:18:42.366 { 00:18:42.366 "code": -13, 00:18:42.366 "message": "Permission denied" 00:18:42.366 } 00:18:42.366 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:42.366 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:42.366 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:42.366 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:42.366 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:42.366 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:42.366 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.366 11:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:42.366 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:43.306 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:43.307 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:43.307 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.567 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:43.567 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:43.567 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:43.567 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3226455 00:18:43.567 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3226455 ']' 00:18:43.567 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3226455 00:18:43.567 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:43.567 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:43.567 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3226455 00:18:43.567 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:43.567 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:43.567 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@970 -- # echo 'killing process with pid 3226455' 00:18:43.567 killing process with pid 3226455 00:18:43.567 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3226455 00:18:43.567 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3226455 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.828 rmmod nvme_tcp 00:18:43.828 rmmod nvme_fabrics 00:18:43.828 rmmod nvme_keyring 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3253852 ']' 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3253852 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3253852 ']' 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3253852 
00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3253852 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3253852' 00:18:43.828 killing process with pid 3253852 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3253852 00:18:43.828 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3253852 00:18:44.088 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:44.088 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:44.088 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:44.088 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:44.088 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:44.088 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:44.088 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:44.088 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:44.088 11:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:44.088 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.089 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.089 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.liT /tmp/spdk.key-sha256.rDC /tmp/spdk.key-sha384.amB /tmp/spdk.key-sha512.tXH /tmp/spdk.key-sha512.IQn /tmp/spdk.key-sha384.pDV /tmp/spdk.key-sha256.Fga '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:46.633 00:18:46.633 real 2m44.223s 00:18:46.633 user 6m7.225s 00:18:46.633 sys 0m24.453s 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.633 ************************************ 00:18:46.633 END TEST nvmf_auth_target 00:18:46.633 ************************************ 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:46.633 ************************************ 00:18:46.633 START TEST nvmf_bdevio_no_huge 00:18:46.633 ************************************ 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:46.633 * Looking for test storage... 00:18:46.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # 
local 'op=<' 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:46.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.633 --rc genhtml_branch_coverage=1 00:18:46.633 --rc genhtml_function_coverage=1 00:18:46.633 --rc genhtml_legend=1 00:18:46.633 --rc geninfo_all_blocks=1 00:18:46.633 --rc geninfo_unexecuted_blocks=1 00:18:46.633 00:18:46.633 ' 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:46.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.633 --rc genhtml_branch_coverage=1 00:18:46.633 --rc genhtml_function_coverage=1 00:18:46.633 --rc genhtml_legend=1 00:18:46.633 --rc geninfo_all_blocks=1 00:18:46.633 --rc geninfo_unexecuted_blocks=1 00:18:46.633 00:18:46.633 ' 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:46.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.633 --rc genhtml_branch_coverage=1 00:18:46.633 --rc genhtml_function_coverage=1 00:18:46.633 --rc genhtml_legend=1 00:18:46.633 --rc geninfo_all_blocks=1 00:18:46.633 --rc geninfo_unexecuted_blocks=1 00:18:46.633 00:18:46.633 ' 00:18:46.633 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:46.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.633 --rc genhtml_branch_coverage=1 
00:18:46.633 --rc genhtml_function_coverage=1 00:18:46.633 --rc genhtml_legend=1 00:18:46.633 --rc geninfo_all_blocks=1 00:18:46.633 --rc geninfo_unexecuted_blocks=1 00:18:46.633 00:18:46.633 ' 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.634 11:00:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:46.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit
00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs
00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no
00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns
00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable
00:18:46.634 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=()
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=()
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=()
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=()
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=()
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=()
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=()
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:18:54.773 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:18:54.773 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:18:54.773 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]]
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:18:54.774 Found net devices under 0000:4b:00.0: cvl_0_0
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]]
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:18:54.774 Found net devices under 0000:4b:00.1: cvl_0_1
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:18:54.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:54.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms
00:18:54.774 
00:18:54.774 --- 10.0.0.2 ping statistics ---
00:18:54.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:54.774 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:54.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:54.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms
00:18:54.774 
00:18:54.774 --- 10.0.0.1 ping statistics ---
00:18:54.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:54.774 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable
00:18:54.774 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:18:54.774 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3262219
00:18:54.774 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3262219
00:18:54.774 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
00:18:54.774 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 3262219 ']'
00:18:54.774 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:54.774 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100
00:18:54.774 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:54.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:54.774 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable
00:18:54.774 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:18:54.774 [2024-11-06 11:00:45.066890] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization...
00:18:54.775 [2024-11-06 11:00:45.066966] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ]
00:18:54.775 [2024-11-06 11:00:45.175125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:18:54.775 [2024-11-06 11:00:45.235698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:54.775 [2024-11-06 11:00:45.235754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:54.775 [2024-11-06 11:00:45.235763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:54.775 [2024-11-06 11:00:45.235770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:54.775 [2024-11-06 11:00:45.235776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:54.775 [2024-11-06 11:00:45.237620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:18:54.775 [2024-11-06 11:00:45.237931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:18:54.775 [2024-11-06 11:00:45.238151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:18:54.775 [2024-11-06 11:00:45.238249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:18:54.775 [2024-11-06 11:00:45.950035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:18:54.775 Malloc0
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:54.775 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:18:54.775 [2024-11-06 11:00:46.003838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:54.775 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:54.775 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024
00:18:54.775 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:18:54.775 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=()
00:18:54.775 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config
00:18:54.775 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:18:54.775 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:18:54.775 {
00:18:54.775 "params": {
00:18:54.775 "name": "Nvme$subsystem",
00:18:54.775 "trtype": "$TEST_TRANSPORT",
00:18:54.775 "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:54.775 "adrfam": "ipv4",
00:18:54.775 "trsvcid": "$NVMF_PORT",
00:18:54.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:54.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:54.775 "hdgst": ${hdgst:-false},
00:18:54.775 "ddgst": ${ddgst:-false}
00:18:54.775 },
00:18:54.775 "method": "bdev_nvme_attach_controller"
00:18:54.775 }
00:18:54.775 EOF
00:18:54.775 )")
00:18:54.775 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat
00:18:54.775 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq .
00:18:54.775 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=,
00:18:54.775 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:18:54.775 "params": {
00:18:54.775 "name": "Nvme1",
00:18:54.775 "trtype": "tcp",
00:18:54.775 "traddr": "10.0.0.2",
00:18:54.775 "adrfam": "ipv4",
00:18:54.775 "trsvcid": "4420",
00:18:54.775 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:54.775 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:54.775 "hdgst": false,
00:18:54.775 "ddgst": false
00:18:54.775 },
00:18:54.775 "method": "bdev_nvme_attach_controller"
00:18:54.775 }'
00:18:54.775 [2024-11-06 11:00:46.063337] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization...
00:18:54.775 [2024-11-06 11:00:46.063406] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3262277 ]
00:18:54.775 [2024-11-06 11:00:46.143932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:18:55.036 [2024-11-06 11:00:46.199652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:55.036 [2024-11-06 11:00:46.199777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:55.036 [2024-11-06 11:00:46.199786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:55.296 I/O targets:
00:18:55.296 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:18:55.296 
00:18:55.296 
00:18:55.296 CUnit - A unit testing framework for C - Version 2.1-3
00:18:55.296 http://cunit.sourceforge.net/
00:18:55.296 
00:18:55.296 
00:18:55.296 Suite: bdevio tests on: Nvme1n1
00:18:55.296 Test: blockdev write read block ...passed
00:18:55.296 Test: blockdev write zeroes read block ...passed
00:18:55.296 Test: blockdev write zeroes read no split ...passed
00:18:55.297 Test: blockdev write zeroes read split ...passed
00:18:55.297 Test: blockdev write zeroes read split partial ...passed
00:18:55.297 Test: blockdev reset ...[2024-11-06 11:00:46.703078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:18:55.297 [2024-11-06 11:00:46.703138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200c800 (9): Bad file descriptor
00:18:55.557 [2024-11-06 11:00:46.723453] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:18:55.557 passed
00:18:55.557 Test: blockdev write read 8 blocks ...passed
00:18:55.557 Test: blockdev write read size > 128k ...passed
00:18:55.557 Test: blockdev write read invalid size ...passed
00:18:55.557 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:18:55.557 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:18:55.557 Test: blockdev write read max offset ...passed
00:18:55.557 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:18:55.557 Test: blockdev writev readv 8 blocks ...passed
00:18:55.557 Test: blockdev writev readv 30 x 1block ...passed
00:18:55.557 Test: blockdev writev readv block ...passed
00:18:55.557 Test: blockdev writev readv size > 128k ...passed
00:18:55.557 Test: blockdev writev readv size > 128k in two iovs ...passed
00:18:55.557 Test: blockdev comparev and writev ...[2024-11-06 11:00:46.947618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:55.557 [2024-11-06 11:00:46.947644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:18:55.557 [2024-11-06 11:00:46.947655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:55.557 [2024-11-06 11:00:46.947661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:18:55.557 [2024-11-06 11:00:46.948088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:55.557 [2024-11-06 11:00:46.948097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:18:55.557 [2024-11-06 11:00:46.948107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:55.557 [2024-11-06 11:00:46.948112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:18:55.557 [2024-11-06 11:00:46.948548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:55.557 [2024-11-06 11:00:46.948557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:18:55.557 [2024-11-06 11:00:46.948566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:55.557 [2024-11-06 11:00:46.948571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:18:55.557 [2024-11-06 11:00:46.949038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:55.557 [2024-11-06 11:00:46.949048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:18:55.557 [2024-11-06 11:00:46.949057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:55.557 [2024-11-06 11:00:46.949062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:18:55.818 passed
00:18:55.818 Test: blockdev nvme passthru rw ...passed
00:18:55.818 Test: blockdev nvme passthru vendor specific ...[2024-11-06 11:00:47.033503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:18:55.818 [2024-11-06 11:00:47.033513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:18:55.818 [2024-11-06 11:00:47.033872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:18:55.818 [2024-11-06 11:00:47.033880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:18:55.818 [2024-11-06 11:00:47.034253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:18:55.818 [2024-11-06 11:00:47.034265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:18:55.818 [2024-11-06 11:00:47.034606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:18:55.818 [2024-11-06 11:00:47.034614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:18:55.818 passed
00:18:55.818 Test: blockdev nvme admin passthru ...passed
00:18:55.818 Test: blockdev copy ...passed
00:18:55.818 
00:18:55.818 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:18:55.818               suites      1      1    n/a      0        0
00:18:55.818                tests     23     23     23      0        0
00:18:55.818              asserts    152    152    152      0      n/a
00:18:55.818 
00:18:55.818 Elapsed time = 1.156 seconds
00:18:56.078 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:56.078 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.078 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:56.078 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.078 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:56.078 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:56.078 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:56.078 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:56.078 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:56.078 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:56.078 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:56.078 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:56.078 rmmod nvme_tcp 00:18:56.078 rmmod nvme_fabrics 00:18:56.078 rmmod nvme_keyring 00:18:56.078 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:56.078 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:56.078 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:56.079 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3262219 ']' 00:18:56.079 11:00:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3262219 00:18:56.079 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 3262219 ']' 00:18:56.079 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 3262219 00:18:56.079 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:18:56.079 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:56.079 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3262219 00:18:56.339 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:18:56.339 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:18:56.339 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3262219' 00:18:56.339 killing process with pid 3262219 00:18:56.339 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 3262219 00:18:56.339 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 3262219 00:18:56.599 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:56.599 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:56.599 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:56.599 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:56.599 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:56.599 11:00:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:56.599 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:56.599 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:56.599 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:56.599 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.599 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.599 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.148 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:59.148 00:18:59.148 real 0m12.419s 00:18:59.148 user 0m14.725s 00:18:59.148 sys 0m6.506s 00:18:59.148 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:59.148 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:59.148 ************************************ 00:18:59.148 END TEST nvmf_bdevio_no_huge 00:18:59.148 ************************************ 00:18:59.148 11:00:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:59.148 11:00:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:59.148 11:00:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:59.148 11:00:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:59.148 
************************************ 00:18:59.148 START TEST nvmf_tls 00:18:59.148 ************************************ 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:59.148 * Looking for test storage... 00:18:59.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:59.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.148 --rc genhtml_branch_coverage=1 00:18:59.148 --rc genhtml_function_coverage=1 00:18:59.148 --rc genhtml_legend=1 00:18:59.148 --rc geninfo_all_blocks=1 00:18:59.148 --rc geninfo_unexecuted_blocks=1 00:18:59.148 00:18:59.148 ' 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:59.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.148 --rc genhtml_branch_coverage=1 00:18:59.148 --rc genhtml_function_coverage=1 00:18:59.148 --rc genhtml_legend=1 00:18:59.148 --rc geninfo_all_blocks=1 00:18:59.148 --rc geninfo_unexecuted_blocks=1 00:18:59.148 00:18:59.148 ' 00:18:59.148 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:59.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.148 --rc genhtml_branch_coverage=1 00:18:59.148 --rc genhtml_function_coverage=1 00:18:59.149 --rc genhtml_legend=1 00:18:59.149 --rc geninfo_all_blocks=1 00:18:59.149 --rc geninfo_unexecuted_blocks=1 00:18:59.149 00:18:59.149 ' 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:59.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.149 --rc genhtml_branch_coverage=1 00:18:59.149 --rc genhtml_function_coverage=1 00:18:59.149 --rc genhtml_legend=1 00:18:59.149 --rc geninfo_all_blocks=1 00:18:59.149 --rc geninfo_unexecuted_blocks=1 00:18:59.149 00:18:59.149 ' 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.149 
11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:59.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:18:59.149 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.286 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.286 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:07.286 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:07.286 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:07.286 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:07.286 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:07.286 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:07.286 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:07.286 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:07.286 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:07.286 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:07.286 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:07.286 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.287 11:00:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:07.287 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:07.287 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.287 11:00:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:07.287 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:07.287 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:07.287 11:00:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:07.287 
11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:07.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:19:07.287 00:19:07.287 --- 10.0.0.2 ping statistics --- 00:19:07.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.287 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:07.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:07.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:19:07.287 00:19:07.287 --- 10.0.0.1 ping statistics --- 00:19:07.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.287 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3266936 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3266936 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:07.287 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3266936 ']' 00:19:07.288 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.288 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:07.288 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.288 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:07.288 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.288 [2024-11-06 11:00:57.761370] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:19:07.288 [2024-11-06 11:00:57.761440] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.288 [2024-11-06 11:00:57.861228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.288 [2024-11-06 11:00:57.910604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.288 [2024-11-06 11:00:57.910659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:07.288 [2024-11-06 11:00:57.910668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.288 [2024-11-06 11:00:57.910675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.288 [2024-11-06 11:00:57.910682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:07.288 [2024-11-06 11:00:57.911459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.288 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:07.288 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:07.288 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:07.288 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:07.288 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.288 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.288 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:07.288 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:07.548 true 00:19:07.548 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:07.548 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:07.807 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:07.807 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:07.807 
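The interface moves traced near the top of this run (nvmf/common.sh@268-287) amount to splitting a NIC pair across a network namespace, so the target and the initiator get separate network stacks on one host. A minimal sketch of that setup, with the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses taken from the log; this only defines a function, and actually running it needs root and real interfaces.

```shell
# Hedged sketch of the namespace split performed above: one interface stays in
# the default netns as the initiator side, its peer moves into the new netns
# for the target. Interface names and addresses are copied from the log.
setup_target_ns() {
    local ns=$1 tgt_if=$2 ini_if=$3
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$ini_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP discovery/IO port on the initiator-facing interface
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}
```

Invoked in the log's terms this would be `setup_target_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1`, after which the two `ping -c 1` probes verify connectivity in each direction before the target starts.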
11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:07.807 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:07.807 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:08.067 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:08.067 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:08.067 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:08.326 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:08.326 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:08.326 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:08.326 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:08.326 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:08.326 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:08.585 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:08.585 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:08.585 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:19:08.845 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:08.845 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:08.845 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:08.845 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:08.845 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:09.105 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:09.105 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:09.366 11:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.qURjSD0Vy7 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.zyRMkz6vkV 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.qURjSD0Vy7 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.zyRMkz6vkV 00:19:09.366 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:09.626 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:09.886 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.qURjSD0Vy7 00:19:09.886 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qURjSD0Vy7 00:19:09.886 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:09.886 [2024-11-06 11:01:01.253528] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.886 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:10.145 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:10.145 [2024-11-06 11:01:01.562282] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:10.145 [2024-11-06 11:01:01.562499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.404 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:10.404 malloc0 00:19:10.404 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:10.663 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qURjSD0Vy7 00:19:10.663 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:10.927 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.qURjSD0Vy7 00:19:20.917 Initializing NVMe Controllers 00:19:20.917 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:20.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:20.917 Initialization complete. Launching workers. 
00:19:20.917 ======================================================== 00:19:20.917 Latency(us) 00:19:20.917 Device Information : IOPS MiB/s Average min max 00:19:20.917 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18616.67 72.72 3437.83 1240.46 4192.46 00:19:20.917 ======================================================== 00:19:20.917 Total : 18616.67 72.72 3437.83 1240.46 4192.46 00:19:20.917 00:19:20.917 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qURjSD0Vy7 00:19:20.917 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:20.917 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:20.917 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:20.917 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qURjSD0Vy7 00:19:20.917 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:21.178 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3269686 00:19:21.178 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:21.178 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3269686 /var/tmp/bdevperf.sock 00:19:21.178 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:21.178 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3269686 ']' 00:19:21.178 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 
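The NVMeTLSkey-1 strings generated earlier in this run (target/tls.sh@119-120 via nvmf/common.sh's format_key) follow the NVMe/TCP PSK interchange format: a prefix, a two-digit hash identifier, then base64 of the configured key characters with a CRC-32 appended. A sketch of that helper, reconstructed from the traced variables (prefix, key, digest) and the embedded `python -` call; the little-endian byte order of the appended CRC is an assumption here.

```shell
# Reconstruction of the format_key helper traced above: the interchange string
# is prefix:digest:base64(key-characters + CRC-32):, CRC byte order assumed LE.
format_key() {
    python3 - "$1" "$2" "$3" <<'EOF'
import base64, sys, zlib

prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
crc = zlib.crc32(key.encode()).to_bytes(4, "little")   # assumption: little-endian
b64 = base64.b64encode(key.encode() + crc).decode()
print(f"{prefix}:{digest:02x}:{b64}:")
EOF
}

format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
```

Provided the CRC byte-order assumption holds, this reproduces the `NVMeTLSkey-1:01:MDAx…JEiQ:` value the log writes into /tmp/tmp.qURjSD0Vy7; the 32 hex characters base64-encode together with the 4 CRC bytes into a 48-character payload.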
00:19:21.178 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:21.178 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:21.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:21.178 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:21.178 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.178 [2024-11-06 11:01:12.386659] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:19:21.178 [2024-11-06 11:01:12.386719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3269686 ] 00:19:21.178 [2024-11-06 11:01:12.444190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.178 [2024-11-06 11:01:12.473050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.178 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:21.178 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:21.178 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qURjSD0Vy7 00:19:21.440 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:19:21.440 [2024-11-06 11:01:12.858428] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:21.701 TLSTESTn1 00:19:21.701 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:21.701 Running I/O for 10 seconds... 00:19:24.022 4425.00 IOPS, 17.29 MiB/s [2024-11-06T10:01:16.111Z] 5251.50 IOPS, 20.51 MiB/s [2024-11-06T10:01:17.096Z] 5742.00 IOPS, 22.43 MiB/s [2024-11-06T10:01:18.480Z] 5753.00 IOPS, 22.47 MiB/s [2024-11-06T10:01:19.422Z] 5689.80 IOPS, 22.23 MiB/s [2024-11-06T10:01:20.364Z] 5803.83 IOPS, 22.67 MiB/s [2024-11-06T10:01:21.307Z] 5851.43 IOPS, 22.86 MiB/s [2024-11-06T10:01:22.249Z] 5833.88 IOPS, 22.79 MiB/s [2024-11-06T10:01:23.190Z] 5832.89 IOPS, 22.78 MiB/s [2024-11-06T10:01:23.190Z] 5810.80 IOPS, 22.70 MiB/s 00:19:31.768 Latency(us) 00:19:31.768 [2024-11-06T10:01:23.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.768 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:31.768 Verification LBA range: start 0x0 length 0x2000 00:19:31.768 TLSTESTn1 : 10.04 5802.17 22.66 0.00 0.00 22011.59 4450.99 83449.17 00:19:31.768 [2024-11-06T10:01:23.190Z] =================================================================================================================== 00:19:31.768 [2024-11-06T10:01:23.190Z] Total : 5802.17 22.66 0.00 0.00 22011.59 4450.99 83449.17 00:19:31.768 { 00:19:31.768 "results": [ 00:19:31.768 { 00:19:31.768 "job": "TLSTESTn1", 00:19:31.768 "core_mask": "0x4", 00:19:31.768 "workload": "verify", 00:19:31.768 "status": "finished", 00:19:31.768 "verify_range": { 00:19:31.768 "start": 0, 00:19:31.768 "length": 8192 00:19:31.768 }, 00:19:31.768 "queue_depth": 128, 00:19:31.768 "io_size": 4096, 00:19:31.768 "runtime": 10.036758, 00:19:31.768 "iops": 
5802.172374784766, 00:19:31.768 "mibps": 22.664735839002994, 00:19:31.768 "io_failed": 0, 00:19:31.768 "io_timeout": 0, 00:19:31.768 "avg_latency_us": 22011.59358507198, 00:19:31.768 "min_latency_us": 4450.986666666667, 00:19:31.768 "max_latency_us": 83449.17333333334 00:19:31.768 } 00:19:31.768 ], 00:19:31.768 "core_count": 1 00:19:31.768 } 00:19:31.768 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:31.768 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3269686 00:19:31.768 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3269686 ']' 00:19:31.768 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3269686 00:19:31.768 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:31.768 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:31.768 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3269686 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3269686' 00:19:32.030 killing process with pid 3269686 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3269686 00:19:32.030 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.030 00:19:32.030 Latency(us) 00:19:32.030 [2024-11-06T10:01:23.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.030 [2024-11-06T10:01:23.452Z] 
=================================================================================================================== 00:19:32.030 [2024-11-06T10:01:23.452Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3269686 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zyRMkz6vkV 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zyRMkz6vkV 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zyRMkz6vkV 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zyRMkz6vkV 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3271835 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3271835 /var/tmp/bdevperf.sock 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3271835 ']' 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:32.030 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.030 [2024-11-06 11:01:23.348772] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
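The `NOT run_bdevperf ...` step above is a negative test: attaching with /tmp/tmp.zyRMkz6vkV, a key the target subsystem was never configured with, must fail, and the wrapper inverts the exit status so that failure counts as a pass. Its core is just status inversion; a minimal stand-in is sketched below (the real helper in autotest_common.sh also does xtrace bookkeeping).

```shell
# Minimal stand-in for autotest_common.sh's NOT helper: succeed only when the
# wrapped command fails, so "NOT run_bdevperf ..." passes on key rejection.
NOT() {
    if "$@"; then
        return 1   # wrapped command unexpectedly succeeded
    fi
    return 0       # wrapped command failed, which is the expected outcome
}
```

This is why the trace ends the section with `es=1` followed by `(( !es == 0 ))`: bdevperf's attach returned an error, which is exactly what the test demanded.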
00:19:32.030 [2024-11-06 11:01:23.348835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271835 ] 00:19:32.030 [2024-11-06 11:01:23.409661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.030 [2024-11-06 11:01:23.438015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.291 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:32.291 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:32.291 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zyRMkz6vkV 00:19:32.291 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:32.552 [2024-11-06 11:01:23.799282] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.552 [2024-11-06 11:01:23.806237] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:32.552 [2024-11-06 11:01:23.806501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb19bb0 (107): Transport endpoint is not connected 00:19:32.552 [2024-11-06 11:01:23.807497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb19bb0 (9): Bad file descriptor 00:19:32.552 [2024-11-06 
11:01:23.808499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:32.552 [2024-11-06 11:01:23.808507] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:32.552 [2024-11-06 11:01:23.808514] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:32.552 [2024-11-06 11:01:23.808521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:32.552 request: 00:19:32.552 { 00:19:32.552 "name": "TLSTEST", 00:19:32.552 "trtype": "tcp", 00:19:32.552 "traddr": "10.0.0.2", 00:19:32.552 "adrfam": "ipv4", 00:19:32.552 "trsvcid": "4420", 00:19:32.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:32.552 "prchk_reftag": false, 00:19:32.552 "prchk_guard": false, 00:19:32.552 "hdgst": false, 00:19:32.552 "ddgst": false, 00:19:32.552 "psk": "key0", 00:19:32.552 "allow_unrecognized_csi": false, 00:19:32.552 "method": "bdev_nvme_attach_controller", 00:19:32.552 "req_id": 1 00:19:32.552 } 00:19:32.552 Got JSON-RPC error response 00:19:32.552 response: 00:19:32.552 { 00:19:32.552 "code": -5, 00:19:32.552 "message": "Input/output error" 00:19:32.552 } 00:19:32.552 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3271835 00:19:32.552 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3271835 ']' 00:19:32.552 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3271835 00:19:32.552 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:32.552 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:32.552 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3271835 00:19:32.552 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:32.552 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:32.552 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3271835' 00:19:32.552 killing process with pid 3271835 00:19:32.552 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3271835 00:19:32.552 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.552 00:19:32.552 Latency(us) 00:19:32.552 [2024-11-06T10:01:23.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.552 [2024-11-06T10:01:23.974Z] =================================================================================================================== 00:19:32.552 [2024-11-06T10:01:23.974Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:32.552 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3271835 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qURjSD0Vy7 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qURjSD0Vy7 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qURjSD0Vy7 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qURjSD0Vy7 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3272038 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3272038 /var/tmp/bdevperf.sock 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 10 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3272038 ']' 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:32.815 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.815 [2024-11-06 11:01:24.045256] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:19:32.815 [2024-11-06 11:01:24.045310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272038 ] 00:19:32.815 [2024-11-06 11:01:24.103997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.815 [2024-11-06 11:01:24.132064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.815 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:32.815 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:32.815 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qURjSD0Vy7 00:19:33.076 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:33.337 [2024-11-06 11:01:24.541344] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.337 [2024-11-06 11:01:24.547925] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:33.337 [2024-11-06 11:01:24.547945] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:33.337 [2024-11-06 11:01:24.547963] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:33.337 [2024-11-06 11:01:24.548480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefebb0 (107): Transport endpoint is not connected 00:19:33.337 [2024-11-06 11:01:24.549476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefebb0 (9): Bad file descriptor 00:19:33.337 [2024-11-06 11:01:24.550478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:33.337 [2024-11-06 11:01:24.550486] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:33.337 [2024-11-06 11:01:24.550493] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:33.337 [2024-11-06 11:01:24.550501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:33.337 request: 00:19:33.337 { 00:19:33.337 "name": "TLSTEST", 00:19:33.337 "trtype": "tcp", 00:19:33.337 "traddr": "10.0.0.2", 00:19:33.337 "adrfam": "ipv4", 00:19:33.337 "trsvcid": "4420", 00:19:33.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.337 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:33.337 "prchk_reftag": false, 00:19:33.337 "prchk_guard": false, 00:19:33.337 "hdgst": false, 00:19:33.337 "ddgst": false, 00:19:33.337 "psk": "key0", 00:19:33.337 "allow_unrecognized_csi": false, 00:19:33.337 "method": "bdev_nvme_attach_controller", 00:19:33.337 "req_id": 1 00:19:33.337 } 00:19:33.338 Got JSON-RPC error response 00:19:33.338 response: 00:19:33.338 { 00:19:33.338 "code": -5, 00:19:33.338 "message": "Input/output error" 00:19:33.338 } 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3272038 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3272038 ']' 00:19:33.338 11:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3272038 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3272038 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3272038' 00:19:33.338 killing process with pid 3272038 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3272038 00:19:33.338 Received shutdown signal, test time was about 10.000000 seconds 00:19:33.338 00:19:33.338 Latency(us) 00:19:33.338 [2024-11-06T10:01:24.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.338 [2024-11-06T10:01:24.760Z] =================================================================================================================== 00:19:33.338 [2024-11-06T10:01:24.760Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3272038 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:33.338 11:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qURjSD0Vy7 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qURjSD0Vy7 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qURjSD0Vy7 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qURjSD0Vy7 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3272070 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3272070 /var/tmp/bdevperf.sock 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3272070 ']' 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:33.338 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.600 [2024-11-06 11:01:24.793396] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:19:33.600 [2024-11-06 11:01:24.793452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272070 ] 00:19:33.600 [2024-11-06 11:01:24.852418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.600 [2024-11-06 11:01:24.881102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.600 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:33.600 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:33.600 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qURjSD0Vy7 00:19:33.861 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:34.122 [2024-11-06 11:01:25.298542] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:34.122 [2024-11-06 11:01:25.309773] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:34.122 [2024-11-06 11:01:25.309792] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:34.122 [2024-11-06 11:01:25.309810] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:34.122 [2024-11-06 11:01:25.310749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf9bb0 (107): Transport endpoint is not connected 00:19:34.122 [2024-11-06 11:01:25.311741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf9bb0 (9): Bad file descriptor 00:19:34.122 [2024-11-06 11:01:25.312744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:34.122 [2024-11-06 11:01:25.312760] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:34.122 [2024-11-06 11:01:25.312766] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:34.122 [2024-11-06 11:01:25.312775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:19:34.122 request: 00:19:34.122 { 00:19:34.122 "name": "TLSTEST", 00:19:34.122 "trtype": "tcp", 00:19:34.122 "traddr": "10.0.0.2", 00:19:34.122 "adrfam": "ipv4", 00:19:34.122 "trsvcid": "4420", 00:19:34.122 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:34.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.122 "prchk_reftag": false, 00:19:34.122 "prchk_guard": false, 00:19:34.122 "hdgst": false, 00:19:34.122 "ddgst": false, 00:19:34.122 "psk": "key0", 00:19:34.122 "allow_unrecognized_csi": false, 00:19:34.122 "method": "bdev_nvme_attach_controller", 00:19:34.122 "req_id": 1 00:19:34.122 } 00:19:34.122 Got JSON-RPC error response 00:19:34.122 response: 00:19:34.122 { 00:19:34.122 "code": -5, 00:19:34.122 "message": "Input/output error" 00:19:34.122 } 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3272070 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3272070 ']' 00:19:34.122 11:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3272070 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3272070 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3272070' 00:19:34.122 killing process with pid 3272070 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3272070 00:19:34.122 Received shutdown signal, test time was about 10.000000 seconds 00:19:34.122 00:19:34.122 Latency(us) 00:19:34.122 [2024-11-06T10:01:25.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.122 [2024-11-06T10:01:25.544Z] =================================================================================================================== 00:19:34.122 [2024-11-06T10:01:25.544Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3272070 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:34.122 11:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3272387 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:34.122 11:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3272387 /var/tmp/bdevperf.sock 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3272387 ']' 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:34.122 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.383 [2024-11-06 11:01:25.559401] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:19:34.383 [2024-11-06 11:01:25.559453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272387 ] 00:19:34.383 [2024-11-06 11:01:25.618158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.383 [2024-11-06 11:01:25.646187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.383 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:34.383 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:34.383 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:34.644 [2024-11-06 11:01:25.878981] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:34.644 [2024-11-06 11:01:25.879008] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:34.644 request: 00:19:34.644 { 00:19:34.644 "name": "key0", 00:19:34.644 "path": "", 00:19:34.644 "method": "keyring_file_add_key", 00:19:34.644 "req_id": 1 00:19:34.644 } 00:19:34.644 Got JSON-RPC error response 00:19:34.644 response: 00:19:34.644 { 00:19:34.644 "code": -1, 00:19:34.644 "message": "Operation not permitted" 00:19:34.644 } 00:19:34.644 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:34.644 [2024-11-06 11:01:26.055504] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:19:34.644 [2024-11-06 11:01:26.055528] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:34.644 request: 00:19:34.644 { 00:19:34.644 "name": "TLSTEST", 00:19:34.644 "trtype": "tcp", 00:19:34.644 "traddr": "10.0.0.2", 00:19:34.644 "adrfam": "ipv4", 00:19:34.644 "trsvcid": "4420", 00:19:34.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.644 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.644 "prchk_reftag": false, 00:19:34.644 "prchk_guard": false, 00:19:34.644 "hdgst": false, 00:19:34.644 "ddgst": false, 00:19:34.644 "psk": "key0", 00:19:34.644 "allow_unrecognized_csi": false, 00:19:34.644 "method": "bdev_nvme_attach_controller", 00:19:34.644 "req_id": 1 00:19:34.644 } 00:19:34.644 Got JSON-RPC error response 00:19:34.644 response: 00:19:34.644 { 00:19:34.644 "code": -126, 00:19:34.644 "message": "Required key not available" 00:19:34.644 } 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3272387 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3272387 ']' 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3272387 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3272387 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3272387' 00:19:34.906 killing process with pid 3272387 
00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3272387 00:19:34.906 Received shutdown signal, test time was about 10.000000 seconds 00:19:34.906 00:19:34.906 Latency(us) 00:19:34.906 [2024-11-06T10:01:26.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.906 [2024-11-06T10:01:26.328Z] =================================================================================================================== 00:19:34.906 [2024-11-06T10:01:26.328Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3272387 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3266936 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3266936 ']' 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3266936 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3266936 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# process_name=reactor_1 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3266936' 00:19:34.906 killing process with pid 3266936 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3266936 00:19:34.906 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3266936 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.rFnE70yuWz 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:35.167 11:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.rFnE70yuWz 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3272428 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3272428 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3272428 ']' 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:35.167 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.167 [2024-11-06 11:01:26.526014] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:19:35.167 [2024-11-06 11:01:26.526068] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.428 [2024-11-06 11:01:26.614558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.428 [2024-11-06 11:01:26.644193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.428 [2024-11-06 11:01:26.644226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.428 [2024-11-06 11:01:26.644232] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.428 [2024-11-06 11:01:26.644236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.428 [2024-11-06 11:01:26.644241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:35.428 [2024-11-06 11:01:26.644707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.000 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:36.000 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:36.000 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:36.000 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:36.000 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.000 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.000 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.rFnE70yuWz 00:19:36.000 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rFnE70yuWz 00:19:36.000 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:36.261 [2024-11-06 11:01:27.504367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.261 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:36.521 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:36.521 [2024-11-06 11:01:27.829160] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.521 [2024-11-06 11:01:27.829364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:36.521 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:36.783 malloc0 00:19:36.783 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:36.783 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rFnE70yuWz 00:19:37.043 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:37.043 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rFnE70yuWz 00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rFnE70yuWz 00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3272934 00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3272934 /var/tmp/bdevperf.sock 
00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3272934 ']' 00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.304 [2024-11-06 11:01:28.524843] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:19:37.304 [2024-11-06 11:01:28.524912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272934 ] 00:19:37.304 [2024-11-06 11:01:28.585168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.304 [2024-11-06 11:01:28.614180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:37.304 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rFnE70yuWz 00:19:37.569 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:37.830 [2024-11-06 11:01:29.031624] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.830 TLSTESTn1 00:19:37.830 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:37.830 Running I/O for 10 seconds... 
00:19:39.791 5387.00 IOPS, 21.04 MiB/s [2024-11-06T10:01:32.597Z] 5969.50 IOPS, 23.32 MiB/s [2024-11-06T10:01:33.542Z] 6162.00 IOPS, 24.07 MiB/s [2024-11-06T10:01:34.483Z] 6142.25 IOPS, 23.99 MiB/s [2024-11-06T10:01:35.427Z] 6223.60 IOPS, 24.31 MiB/s [2024-11-06T10:01:36.368Z] 6115.67 IOPS, 23.89 MiB/s [2024-11-06T10:01:37.309Z] 6032.71 IOPS, 23.57 MiB/s [2024-11-06T10:01:38.250Z] 6002.50 IOPS, 23.45 MiB/s [2024-11-06T10:01:39.633Z] 5985.56 IOPS, 23.38 MiB/s [2024-11-06T10:01:39.633Z] 5970.50 IOPS, 23.32 MiB/s 00:19:48.211 Latency(us) 00:19:48.211 [2024-11-06T10:01:39.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.211 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:48.211 Verification LBA range: start 0x0 length 0x2000 00:19:48.211 TLSTESTn1 : 10.01 5975.79 23.34 0.00 0.00 21391.02 5242.88 72963.41 00:19:48.211 [2024-11-06T10:01:39.633Z] =================================================================================================================== 00:19:48.211 [2024-11-06T10:01:39.633Z] Total : 5975.79 23.34 0.00 0.00 21391.02 5242.88 72963.41 00:19:48.211 { 00:19:48.211 "results": [ 00:19:48.211 { 00:19:48.211 "job": "TLSTESTn1", 00:19:48.211 "core_mask": "0x4", 00:19:48.211 "workload": "verify", 00:19:48.211 "status": "finished", 00:19:48.211 "verify_range": { 00:19:48.211 "start": 0, 00:19:48.211 "length": 8192 00:19:48.211 }, 00:19:48.211 "queue_depth": 128, 00:19:48.211 "io_size": 4096, 00:19:48.211 "runtime": 10.01256, 00:19:48.211 "iops": 5975.794402230798, 00:19:48.211 "mibps": 23.342946883714056, 00:19:48.211 "io_failed": 0, 00:19:48.211 "io_timeout": 0, 00:19:48.211 "avg_latency_us": 21391.022106641263, 00:19:48.211 "min_latency_us": 5242.88, 00:19:48.211 "max_latency_us": 72963.41333333333 00:19:48.211 } 00:19:48.211 ], 00:19:48.211 "core_count": 1 00:19:48.211 } 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3272934 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3272934 ']' 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3272934 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3272934 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3272934' 00:19:48.211 killing process with pid 3272934 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3272934 00:19:48.211 Received shutdown signal, test time was about 10.000000 seconds 00:19:48.211 00:19:48.211 Latency(us) 00:19:48.211 [2024-11-06T10:01:39.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.211 [2024-11-06T10:01:39.633Z] =================================================================================================================== 00:19:48.211 [2024-11-06T10:01:39.633Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3272934 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.rFnE70yuWz 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rFnE70yuWz 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rFnE70yuWz 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rFnE70yuWz 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rFnE70yuWz 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3275128 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3275128 /var/tmp/bdevperf.sock 00:19:48.211 
11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3275128 ']' 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:48.211 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.211 [2024-11-06 11:01:39.496442] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:19:48.211 [2024-11-06 11:01:39.496497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3275128 ] 00:19:48.211 [2024-11-06 11:01:39.554723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.211 [2024-11-06 11:01:39.582968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.471 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:48.471 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:48.471 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rFnE70yuWz 00:19:48.471 [2024-11-06 11:01:39.816056] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rFnE70yuWz': 0100666 00:19:48.471 [2024-11-06 11:01:39.816085] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:48.471 request: 00:19:48.471 { 00:19:48.471 "name": "key0", 00:19:48.471 "path": "/tmp/tmp.rFnE70yuWz", 00:19:48.472 "method": "keyring_file_add_key", 00:19:48.472 "req_id": 1 00:19:48.472 } 00:19:48.472 Got JSON-RPC error response 00:19:48.472 response: 00:19:48.472 { 00:19:48.472 "code": -1, 00:19:48.472 "message": "Operation not permitted" 00:19:48.472 } 00:19:48.472 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.734 [2024-11-06 11:01:40.000591] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:48.734 [2024-11-06 11:01:40.000614] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:48.734 request: 00:19:48.734 { 00:19:48.734 "name": "TLSTEST", 00:19:48.734 "trtype": "tcp", 00:19:48.734 "traddr": "10.0.0.2", 00:19:48.734 "adrfam": "ipv4", 00:19:48.734 "trsvcid": "4420", 00:19:48.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.734 "prchk_reftag": false, 00:19:48.734 "prchk_guard": false, 00:19:48.734 "hdgst": false, 00:19:48.734 "ddgst": false, 00:19:48.734 "psk": "key0", 00:19:48.734 "allow_unrecognized_csi": false, 00:19:48.734 "method": "bdev_nvme_attach_controller", 00:19:48.734 "req_id": 1 00:19:48.734 } 00:19:48.734 Got JSON-RPC error response 00:19:48.734 response: 00:19:48.734 { 00:19:48.734 "code": -126, 00:19:48.734 "message": "Required key not available" 00:19:48.734 } 00:19:48.734 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3275128 00:19:48.734 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3275128 ']' 00:19:48.734 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3275128 00:19:48.734 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:48.734 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:48.735 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3275128 00:19:48.735 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:48.735 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:48.735 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 3275128' 00:19:48.735 killing process with pid 3275128 00:19:48.735 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3275128 00:19:48.735 Received shutdown signal, test time was about 10.000000 seconds 00:19:48.735 00:19:48.735 Latency(us) 00:19:48.735 [2024-11-06T10:01:40.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.735 [2024-11-06T10:01:40.157Z] =================================================================================================================== 00:19:48.735 [2024-11-06T10:01:40.157Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:48.735 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3275128 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3272428 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3272428 ']' 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3272428 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3272428 00:19:48.994 
11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3272428' 00:19:48.994 killing process with pid 3272428 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3272428 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3272428 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:48.994 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.995 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3275177 00:19:48.995 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3275177 00:19:48.995 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:48.995 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3275177 ']' 00:19:48.995 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.995 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:48.995 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:48.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.995 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:48.995 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.254 [2024-11-06 11:01:40.432100] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:19:49.255 [2024-11-06 11:01:40.432162] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.255 [2024-11-06 11:01:40.521548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.255 [2024-11-06 11:01:40.551170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.255 [2024-11-06 11:01:40.551199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.255 [2024-11-06 11:01:40.551205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.255 [2024-11-06 11:01:40.551210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.255 [2024-11-06 11:01:40.551214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:49.255 [2024-11-06 11:01:40.551682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.825 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:49.825 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:49.825 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:49.825 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:49.825 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.086 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.086 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.rFnE70yuWz 00:19:50.086 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:50.086 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.rFnE70yuWz 00:19:50.086 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:50.086 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.086 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:50.086 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.086 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.rFnE70yuWz 00:19:50.086 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rFnE70yuWz 00:19:50.086 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:50.086 [2024-11-06 11:01:41.411716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.086 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:50.347 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:50.347 [2024-11-06 11:01:41.732507] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:50.347 [2024-11-06 11:01:41.732702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.347 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:50.607 malloc0 00:19:50.607 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:50.868 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rFnE70yuWz 00:19:50.868 [2024-11-06 11:01:42.203322] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rFnE70yuWz': 0100666 00:19:50.868 [2024-11-06 11:01:42.203342] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:50.868 request: 00:19:50.868 { 00:19:50.868 "name": "key0", 00:19:50.868 "path": "/tmp/tmp.rFnE70yuWz", 00:19:50.868 "method": "keyring_file_add_key", 00:19:50.868 "req_id": 1 
00:19:50.868 } 00:19:50.868 Got JSON-RPC error response 00:19:50.868 response: 00:19:50.868 { 00:19:50.868 "code": -1, 00:19:50.868 "message": "Operation not permitted" 00:19:50.868 } 00:19:50.868 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:51.129 [2024-11-06 11:01:42.359729] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:51.129 [2024-11-06 11:01:42.359758] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:51.129 request: 00:19:51.129 { 00:19:51.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.129 "host": "nqn.2016-06.io.spdk:host1", 00:19:51.129 "psk": "key0", 00:19:51.129 "method": "nvmf_subsystem_add_host", 00:19:51.129 "req_id": 1 00:19:51.129 } 00:19:51.129 Got JSON-RPC error response 00:19:51.129 response: 00:19:51.129 { 00:19:51.129 "code": -32603, 00:19:51.129 "message": "Internal error" 00:19:51.129 } 00:19:51.129 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:51.129 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:51.129 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:51.129 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:51.129 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3275177 00:19:51.129 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3275177 ']' 00:19:51.129 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3275177 00:19:51.129 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:51.129 11:01:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:51.129 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3275177 00:19:51.129 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:51.129 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:51.129 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3275177' 00:19:51.129 killing process with pid 3275177 00:19:51.129 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3275177 00:19:51.129 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3275177 00:19:51.129 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.rFnE70yuWz 00:19:51.390 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:51.390 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:51.390 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:51.390 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.390 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3275778 00:19:51.390 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3275778 00:19:51.390 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:51.390 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3275778 ']' 00:19:51.390 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.390 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:51.390 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.390 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:51.390 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.390 [2024-11-06 11:01:42.609074] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:19:51.390 [2024-11-06 11:01:42.609129] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.390 [2024-11-06 11:01:42.698855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.390 [2024-11-06 11:01:42.732288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.390 [2024-11-06 11:01:42.732325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.390 [2024-11-06 11:01:42.732331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.390 [2024-11-06 11:01:42.732336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.390 [2024-11-06 11:01:42.732341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:51.390 [2024-11-06 11:01:42.732881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.331 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:52.331 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:52.331 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.331 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:52.331 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.331 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.331 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.rFnE70yuWz 00:19:52.331 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rFnE70yuWz 00:19:52.331 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:52.331 [2024-11-06 11:01:43.595194] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.331 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:52.592 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:52.592 [2024-11-06 11:01:43.932018] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:52.592 [2024-11-06 11:01:43.932216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:52.592 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:52.853 malloc0 00:19:52.853 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:53.113 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rFnE70yuWz 00:19:53.113 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:53.374 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:53.374 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3276212 00:19:53.374 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:53.374 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3276212 /var/tmp/bdevperf.sock 00:19:53.374 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3276212 ']' 00:19:53.374 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.374 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:53.374 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:19:53.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.374 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:53.374 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.374 [2024-11-06 11:01:44.653524] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:19:53.374 [2024-11-06 11:01:44.653583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276212 ] 00:19:53.374 [2024-11-06 11:01:44.718111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.374 [2024-11-06 11:01:44.746631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.635 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:53.635 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:53.635 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rFnE70yuWz 00:19:53.635 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:53.895 [2024-11-06 11:01:45.155977] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:53.895 TLSTESTn1 00:19:53.895 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:54.156 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:54.156 "subsystems": [ 00:19:54.156 { 00:19:54.156 "subsystem": "keyring", 00:19:54.156 "config": [ 00:19:54.156 { 00:19:54.156 "method": "keyring_file_add_key", 00:19:54.156 "params": { 00:19:54.156 "name": "key0", 00:19:54.156 "path": "/tmp/tmp.rFnE70yuWz" 00:19:54.156 } 00:19:54.156 } 00:19:54.156 ] 00:19:54.156 }, 00:19:54.156 { 00:19:54.156 "subsystem": "iobuf", 00:19:54.156 "config": [ 00:19:54.156 { 00:19:54.156 "method": "iobuf_set_options", 00:19:54.156 "params": { 00:19:54.156 "small_pool_count": 8192, 00:19:54.156 "large_pool_count": 1024, 00:19:54.156 "small_bufsize": 8192, 00:19:54.156 "large_bufsize": 135168, 00:19:54.156 "enable_numa": false 00:19:54.156 } 00:19:54.156 } 00:19:54.157 ] 00:19:54.157 }, 00:19:54.157 { 00:19:54.157 "subsystem": "sock", 00:19:54.157 "config": [ 00:19:54.157 { 00:19:54.157 "method": "sock_set_default_impl", 00:19:54.157 "params": { 00:19:54.157 "impl_name": "posix" 00:19:54.157 } 00:19:54.157 }, 00:19:54.157 { 00:19:54.157 "method": "sock_impl_set_options", 00:19:54.157 "params": { 00:19:54.157 "impl_name": "ssl", 00:19:54.157 "recv_buf_size": 4096, 00:19:54.157 "send_buf_size": 4096, 00:19:54.157 "enable_recv_pipe": true, 00:19:54.157 "enable_quickack": false, 00:19:54.157 "enable_placement_id": 0, 00:19:54.157 "enable_zerocopy_send_server": true, 00:19:54.157 "enable_zerocopy_send_client": false, 00:19:54.157 "zerocopy_threshold": 0, 00:19:54.157 "tls_version": 0, 00:19:54.157 "enable_ktls": false 00:19:54.157 } 00:19:54.157 }, 00:19:54.157 { 00:19:54.157 "method": "sock_impl_set_options", 00:19:54.157 "params": { 00:19:54.157 "impl_name": "posix", 00:19:54.157 "recv_buf_size": 2097152, 00:19:54.157 "send_buf_size": 2097152, 00:19:54.157 "enable_recv_pipe": true, 00:19:54.157 "enable_quickack": false, 00:19:54.157 "enable_placement_id": 0, 
00:19:54.157 "enable_zerocopy_send_server": true, 00:19:54.157 "enable_zerocopy_send_client": false, 00:19:54.157 "zerocopy_threshold": 0, 00:19:54.157 "tls_version": 0, 00:19:54.157 "enable_ktls": false 00:19:54.157 } 00:19:54.157 } 00:19:54.157 ] 00:19:54.157 }, 00:19:54.157 { 00:19:54.157 "subsystem": "vmd", 00:19:54.157 "config": [] 00:19:54.157 }, 00:19:54.157 { 00:19:54.157 "subsystem": "accel", 00:19:54.157 "config": [ 00:19:54.157 { 00:19:54.157 "method": "accel_set_options", 00:19:54.157 "params": { 00:19:54.157 "small_cache_size": 128, 00:19:54.157 "large_cache_size": 16, 00:19:54.157 "task_count": 2048, 00:19:54.157 "sequence_count": 2048, 00:19:54.157 "buf_count": 2048 00:19:54.157 } 00:19:54.157 } 00:19:54.157 ] 00:19:54.157 }, 00:19:54.157 { 00:19:54.157 "subsystem": "bdev", 00:19:54.157 "config": [ 00:19:54.157 { 00:19:54.157 "method": "bdev_set_options", 00:19:54.157 "params": { 00:19:54.157 "bdev_io_pool_size": 65535, 00:19:54.157 "bdev_io_cache_size": 256, 00:19:54.157 "bdev_auto_examine": true, 00:19:54.157 "iobuf_small_cache_size": 128, 00:19:54.157 "iobuf_large_cache_size": 16 00:19:54.157 } 00:19:54.157 }, 00:19:54.157 { 00:19:54.157 "method": "bdev_raid_set_options", 00:19:54.157 "params": { 00:19:54.157 "process_window_size_kb": 1024, 00:19:54.157 "process_max_bandwidth_mb_sec": 0 00:19:54.157 } 00:19:54.157 }, 00:19:54.157 { 00:19:54.157 "method": "bdev_iscsi_set_options", 00:19:54.157 "params": { 00:19:54.157 "timeout_sec": 30 00:19:54.157 } 00:19:54.157 }, 00:19:54.157 { 00:19:54.157 "method": "bdev_nvme_set_options", 00:19:54.157 "params": { 00:19:54.157 "action_on_timeout": "none", 00:19:54.157 "timeout_us": 0, 00:19:54.157 "timeout_admin_us": 0, 00:19:54.157 "keep_alive_timeout_ms": 10000, 00:19:54.157 "arbitration_burst": 0, 00:19:54.157 "low_priority_weight": 0, 00:19:54.157 "medium_priority_weight": 0, 00:19:54.157 "high_priority_weight": 0, 00:19:54.157 "nvme_adminq_poll_period_us": 10000, 00:19:54.157 "nvme_ioq_poll_period_us": 0, 
00:19:54.157 "io_queue_requests": 0, 00:19:54.157 "delay_cmd_submit": true, 00:19:54.157 "transport_retry_count": 4, 00:19:54.157 "bdev_retry_count": 3, 00:19:54.157 "transport_ack_timeout": 0, 00:19:54.157 "ctrlr_loss_timeout_sec": 0, 00:19:54.157 "reconnect_delay_sec": 0, 00:19:54.157 "fast_io_fail_timeout_sec": 0, 00:19:54.157 "disable_auto_failback": false, 00:19:54.157 "generate_uuids": false, 00:19:54.157 "transport_tos": 0, 00:19:54.157 "nvme_error_stat": false, 00:19:54.157 "rdma_srq_size": 0, 00:19:54.157 "io_path_stat": false, 00:19:54.157 "allow_accel_sequence": false, 00:19:54.157 "rdma_max_cq_size": 0, 00:19:54.157 "rdma_cm_event_timeout_ms": 0, 00:19:54.157 "dhchap_digests": [ 00:19:54.157 "sha256", 00:19:54.157 "sha384", 00:19:54.157 "sha512" 00:19:54.157 ], 00:19:54.157 "dhchap_dhgroups": [ 00:19:54.157 "null", 00:19:54.157 "ffdhe2048", 00:19:54.157 "ffdhe3072", 00:19:54.157 "ffdhe4096", 00:19:54.157 "ffdhe6144", 00:19:54.157 "ffdhe8192" 00:19:54.157 ] 00:19:54.157 } 00:19:54.157 }, 00:19:54.157 { 00:19:54.157 "method": "bdev_nvme_set_hotplug", 00:19:54.157 "params": { 00:19:54.157 "period_us": 100000, 00:19:54.157 "enable": false 00:19:54.157 } 00:19:54.157 }, 00:19:54.157 { 00:19:54.157 "method": "bdev_malloc_create", 00:19:54.157 "params": { 00:19:54.157 "name": "malloc0", 00:19:54.157 "num_blocks": 8192, 00:19:54.157 "block_size": 4096, 00:19:54.157 "physical_block_size": 4096, 00:19:54.157 "uuid": "05563168-d54c-4313-add6-54221627547c", 00:19:54.157 "optimal_io_boundary": 0, 00:19:54.157 "md_size": 0, 00:19:54.157 "dif_type": 0, 00:19:54.157 "dif_is_head_of_md": false, 00:19:54.157 "dif_pi_format": 0 00:19:54.157 } 00:19:54.157 }, 00:19:54.157 { 00:19:54.157 "method": "bdev_wait_for_examine" 00:19:54.157 } 00:19:54.157 ] 00:19:54.157 }, 00:19:54.157 { 00:19:54.157 "subsystem": "nbd", 00:19:54.157 "config": [] 00:19:54.157 }, 00:19:54.158 { 00:19:54.158 "subsystem": "scheduler", 00:19:54.158 "config": [ 00:19:54.158 { 00:19:54.158 "method": 
"framework_set_scheduler", 00:19:54.158 "params": { 00:19:54.158 "name": "static" 00:19:54.158 } 00:19:54.158 } 00:19:54.158 ] 00:19:54.158 }, 00:19:54.158 { 00:19:54.158 "subsystem": "nvmf", 00:19:54.158 "config": [ 00:19:54.158 { 00:19:54.158 "method": "nvmf_set_config", 00:19:54.158 "params": { 00:19:54.158 "discovery_filter": "match_any", 00:19:54.158 "admin_cmd_passthru": { 00:19:54.158 "identify_ctrlr": false 00:19:54.158 }, 00:19:54.158 "dhchap_digests": [ 00:19:54.158 "sha256", 00:19:54.158 "sha384", 00:19:54.158 "sha512" 00:19:54.158 ], 00:19:54.158 "dhchap_dhgroups": [ 00:19:54.158 "null", 00:19:54.158 "ffdhe2048", 00:19:54.158 "ffdhe3072", 00:19:54.158 "ffdhe4096", 00:19:54.158 "ffdhe6144", 00:19:54.158 "ffdhe8192" 00:19:54.158 ] 00:19:54.158 } 00:19:54.158 }, 00:19:54.158 { 00:19:54.158 "method": "nvmf_set_max_subsystems", 00:19:54.158 "params": { 00:19:54.158 "max_subsystems": 1024 00:19:54.158 } 00:19:54.158 }, 00:19:54.158 { 00:19:54.158 "method": "nvmf_set_crdt", 00:19:54.158 "params": { 00:19:54.158 "crdt1": 0, 00:19:54.158 "crdt2": 0, 00:19:54.158 "crdt3": 0 00:19:54.158 } 00:19:54.158 }, 00:19:54.158 { 00:19:54.158 "method": "nvmf_create_transport", 00:19:54.158 "params": { 00:19:54.158 "trtype": "TCP", 00:19:54.158 "max_queue_depth": 128, 00:19:54.158 "max_io_qpairs_per_ctrlr": 127, 00:19:54.158 "in_capsule_data_size": 4096, 00:19:54.158 "max_io_size": 131072, 00:19:54.158 "io_unit_size": 131072, 00:19:54.158 "max_aq_depth": 128, 00:19:54.158 "num_shared_buffers": 511, 00:19:54.158 "buf_cache_size": 4294967295, 00:19:54.158 "dif_insert_or_strip": false, 00:19:54.158 "zcopy": false, 00:19:54.158 "c2h_success": false, 00:19:54.158 "sock_priority": 0, 00:19:54.158 "abort_timeout_sec": 1, 00:19:54.158 "ack_timeout": 0, 00:19:54.158 "data_wr_pool_size": 0 00:19:54.158 } 00:19:54.158 }, 00:19:54.158 { 00:19:54.158 "method": "nvmf_create_subsystem", 00:19:54.158 "params": { 00:19:54.158 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.158 
"allow_any_host": false, 00:19:54.158 "serial_number": "SPDK00000000000001", 00:19:54.158 "model_number": "SPDK bdev Controller", 00:19:54.158 "max_namespaces": 10, 00:19:54.158 "min_cntlid": 1, 00:19:54.158 "max_cntlid": 65519, 00:19:54.158 "ana_reporting": false 00:19:54.158 } 00:19:54.158 }, 00:19:54.158 { 00:19:54.158 "method": "nvmf_subsystem_add_host", 00:19:54.158 "params": { 00:19:54.158 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.158 "host": "nqn.2016-06.io.spdk:host1", 00:19:54.158 "psk": "key0" 00:19:54.158 } 00:19:54.158 }, 00:19:54.158 { 00:19:54.158 "method": "nvmf_subsystem_add_ns", 00:19:54.158 "params": { 00:19:54.158 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.158 "namespace": { 00:19:54.158 "nsid": 1, 00:19:54.158 "bdev_name": "malloc0", 00:19:54.158 "nguid": "05563168D54C4313ADD654221627547C", 00:19:54.158 "uuid": "05563168-d54c-4313-add6-54221627547c", 00:19:54.158 "no_auto_visible": false 00:19:54.158 } 00:19:54.158 } 00:19:54.158 }, 00:19:54.158 { 00:19:54.158 "method": "nvmf_subsystem_add_listener", 00:19:54.158 "params": { 00:19:54.158 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.158 "listen_address": { 00:19:54.158 "trtype": "TCP", 00:19:54.158 "adrfam": "IPv4", 00:19:54.158 "traddr": "10.0.0.2", 00:19:54.158 "trsvcid": "4420" 00:19:54.158 }, 00:19:54.158 "secure_channel": true 00:19:54.158 } 00:19:54.158 } 00:19:54.158 ] 00:19:54.158 } 00:19:54.158 ] 00:19:54.158 }' 00:19:54.158 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:54.419 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:54.419 "subsystems": [ 00:19:54.419 { 00:19:54.419 "subsystem": "keyring", 00:19:54.419 "config": [ 00:19:54.419 { 00:19:54.419 "method": "keyring_file_add_key", 00:19:54.419 "params": { 00:19:54.419 "name": "key0", 00:19:54.419 "path": "/tmp/tmp.rFnE70yuWz" 00:19:54.419 } 
00:19:54.419 } 00:19:54.419 ] 00:19:54.419 }, 00:19:54.419 { 00:19:54.419 "subsystem": "iobuf", 00:19:54.419 "config": [ 00:19:54.419 { 00:19:54.419 "method": "iobuf_set_options", 00:19:54.419 "params": { 00:19:54.419 "small_pool_count": 8192, 00:19:54.419 "large_pool_count": 1024, 00:19:54.419 "small_bufsize": 8192, 00:19:54.419 "large_bufsize": 135168, 00:19:54.419 "enable_numa": false 00:19:54.419 } 00:19:54.419 } 00:19:54.419 ] 00:19:54.419 }, 00:19:54.419 { 00:19:54.419 "subsystem": "sock", 00:19:54.419 "config": [ 00:19:54.419 { 00:19:54.420 "method": "sock_set_default_impl", 00:19:54.420 "params": { 00:19:54.420 "impl_name": "posix" 00:19:54.420 } 00:19:54.420 }, 00:19:54.420 { 00:19:54.420 "method": "sock_impl_set_options", 00:19:54.420 "params": { 00:19:54.420 "impl_name": "ssl", 00:19:54.420 "recv_buf_size": 4096, 00:19:54.420 "send_buf_size": 4096, 00:19:54.420 "enable_recv_pipe": true, 00:19:54.420 "enable_quickack": false, 00:19:54.420 "enable_placement_id": 0, 00:19:54.420 "enable_zerocopy_send_server": true, 00:19:54.420 "enable_zerocopy_send_client": false, 00:19:54.420 "zerocopy_threshold": 0, 00:19:54.420 "tls_version": 0, 00:19:54.420 "enable_ktls": false 00:19:54.420 } 00:19:54.420 }, 00:19:54.420 { 00:19:54.420 "method": "sock_impl_set_options", 00:19:54.420 "params": { 00:19:54.420 "impl_name": "posix", 00:19:54.420 "recv_buf_size": 2097152, 00:19:54.420 "send_buf_size": 2097152, 00:19:54.420 "enable_recv_pipe": true, 00:19:54.420 "enable_quickack": false, 00:19:54.420 "enable_placement_id": 0, 00:19:54.420 "enable_zerocopy_send_server": true, 00:19:54.420 "enable_zerocopy_send_client": false, 00:19:54.420 "zerocopy_threshold": 0, 00:19:54.420 "tls_version": 0, 00:19:54.420 "enable_ktls": false 00:19:54.420 } 00:19:54.420 } 00:19:54.420 ] 00:19:54.420 }, 00:19:54.420 { 00:19:54.420 "subsystem": "vmd", 00:19:54.420 "config": [] 00:19:54.420 }, 00:19:54.420 { 00:19:54.420 "subsystem": "accel", 00:19:54.420 "config": [ 00:19:54.420 { 00:19:54.420 
"method": "accel_set_options", 00:19:54.420 "params": { 00:19:54.420 "small_cache_size": 128, 00:19:54.420 "large_cache_size": 16, 00:19:54.420 "task_count": 2048, 00:19:54.420 "sequence_count": 2048, 00:19:54.420 "buf_count": 2048 00:19:54.420 } 00:19:54.420 } 00:19:54.420 ] 00:19:54.420 }, 00:19:54.420 { 00:19:54.420 "subsystem": "bdev", 00:19:54.420 "config": [ 00:19:54.420 { 00:19:54.420 "method": "bdev_set_options", 00:19:54.420 "params": { 00:19:54.420 "bdev_io_pool_size": 65535, 00:19:54.420 "bdev_io_cache_size": 256, 00:19:54.420 "bdev_auto_examine": true, 00:19:54.420 "iobuf_small_cache_size": 128, 00:19:54.420 "iobuf_large_cache_size": 16 00:19:54.420 } 00:19:54.420 }, 00:19:54.420 { 00:19:54.420 "method": "bdev_raid_set_options", 00:19:54.420 "params": { 00:19:54.420 "process_window_size_kb": 1024, 00:19:54.420 "process_max_bandwidth_mb_sec": 0 00:19:54.420 } 00:19:54.420 }, 00:19:54.420 { 00:19:54.420 "method": "bdev_iscsi_set_options", 00:19:54.420 "params": { 00:19:54.420 "timeout_sec": 30 00:19:54.420 } 00:19:54.420 }, 00:19:54.420 { 00:19:54.420 "method": "bdev_nvme_set_options", 00:19:54.420 "params": { 00:19:54.420 "action_on_timeout": "none", 00:19:54.420 "timeout_us": 0, 00:19:54.420 "timeout_admin_us": 0, 00:19:54.420 "keep_alive_timeout_ms": 10000, 00:19:54.420 "arbitration_burst": 0, 00:19:54.420 "low_priority_weight": 0, 00:19:54.420 "medium_priority_weight": 0, 00:19:54.420 "high_priority_weight": 0, 00:19:54.420 "nvme_adminq_poll_period_us": 10000, 00:19:54.420 "nvme_ioq_poll_period_us": 0, 00:19:54.420 "io_queue_requests": 512, 00:19:54.420 "delay_cmd_submit": true, 00:19:54.420 "transport_retry_count": 4, 00:19:54.420 "bdev_retry_count": 3, 00:19:54.420 "transport_ack_timeout": 0, 00:19:54.420 "ctrlr_loss_timeout_sec": 0, 00:19:54.420 "reconnect_delay_sec": 0, 00:19:54.420 "fast_io_fail_timeout_sec": 0, 00:19:54.420 "disable_auto_failback": false, 00:19:54.420 "generate_uuids": false, 00:19:54.420 "transport_tos": 0, 00:19:54.420 
"nvme_error_stat": false, 00:19:54.420 "rdma_srq_size": 0, 00:19:54.420 "io_path_stat": false, 00:19:54.420 "allow_accel_sequence": false, 00:19:54.420 "rdma_max_cq_size": 0, 00:19:54.420 "rdma_cm_event_timeout_ms": 0, 00:19:54.420 "dhchap_digests": [ 00:19:54.420 "sha256", 00:19:54.420 "sha384", 00:19:54.420 "sha512" 00:19:54.420 ], 00:19:54.420 "dhchap_dhgroups": [ 00:19:54.420 "null", 00:19:54.420 "ffdhe2048", 00:19:54.420 "ffdhe3072", 00:19:54.420 "ffdhe4096", 00:19:54.420 "ffdhe6144", 00:19:54.420 "ffdhe8192" 00:19:54.420 ] 00:19:54.420 } 00:19:54.420 }, 00:19:54.420 { 00:19:54.420 "method": "bdev_nvme_attach_controller", 00:19:54.420 "params": { 00:19:54.420 "name": "TLSTEST", 00:19:54.420 "trtype": "TCP", 00:19:54.420 "adrfam": "IPv4", 00:19:54.420 "traddr": "10.0.0.2", 00:19:54.420 "trsvcid": "4420", 00:19:54.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.420 "prchk_reftag": false, 00:19:54.420 "prchk_guard": false, 00:19:54.420 "ctrlr_loss_timeout_sec": 0, 00:19:54.420 "reconnect_delay_sec": 0, 00:19:54.420 "fast_io_fail_timeout_sec": 0, 00:19:54.420 "psk": "key0", 00:19:54.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.420 "hdgst": false, 00:19:54.420 "ddgst": false, 00:19:54.420 "multipath": "multipath" 00:19:54.420 } 00:19:54.420 }, 00:19:54.420 { 00:19:54.420 "method": "bdev_nvme_set_hotplug", 00:19:54.420 "params": { 00:19:54.420 "period_us": 100000, 00:19:54.420 "enable": false 00:19:54.420 } 00:19:54.420 }, 00:19:54.420 { 00:19:54.420 "method": "bdev_wait_for_examine" 00:19:54.420 } 00:19:54.420 ] 00:19:54.420 }, 00:19:54.420 { 00:19:54.420 "subsystem": "nbd", 00:19:54.420 "config": [] 00:19:54.420 } 00:19:54.420 ] 00:19:54.420 }' 00:19:54.420 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3276212 00:19:54.420 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3276212 ']' 00:19:54.420 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- 
# kill -0 3276212 00:19:54.420 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:54.420 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:54.420 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3276212 00:19:54.420 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:54.420 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:54.420 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3276212' 00:19:54.420 killing process with pid 3276212 00:19:54.420 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3276212 00:19:54.420 Received shutdown signal, test time was about 10.000000 seconds 00:19:54.420 00:19:54.420 Latency(us) 00:19:54.420 [2024-11-06T10:01:45.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.420 [2024-11-06T10:01:45.842Z] =================================================================================================================== 00:19:54.420 [2024-11-06T10:01:45.842Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:54.420 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3276212 00:19:54.682 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3275778 00:19:54.682 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3275778 ']' 00:19:54.682 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3275778 00:19:54.682 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:54.682 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:54.682 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3275778 00:19:54.682 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:54.682 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:54.682 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3275778' 00:19:54.682 killing process with pid 3275778 00:19:54.682 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3275778 00:19:54.682 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3275778 00:19:54.682 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:54.682 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:54.682 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:54.682 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.682 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:54.682 "subsystems": [ 00:19:54.682 { 00:19:54.682 "subsystem": "keyring", 00:19:54.682 "config": [ 00:19:54.682 { 00:19:54.682 "method": "keyring_file_add_key", 00:19:54.682 "params": { 00:19:54.682 "name": "key0", 00:19:54.682 "path": "/tmp/tmp.rFnE70yuWz" 00:19:54.682 } 00:19:54.682 } 00:19:54.682 ] 00:19:54.682 }, 00:19:54.682 { 00:19:54.682 "subsystem": "iobuf", 00:19:54.682 "config": [ 00:19:54.682 { 00:19:54.682 "method": "iobuf_set_options", 00:19:54.682 "params": { 00:19:54.682 "small_pool_count": 8192, 00:19:54.682 "large_pool_count": 1024, 00:19:54.682 "small_bufsize": 8192, 00:19:54.682 "large_bufsize": 135168, 
00:19:54.682 "enable_numa": false 00:19:54.682 } 00:19:54.682 } 00:19:54.682 ] 00:19:54.682 }, 00:19:54.682 { 00:19:54.682 "subsystem": "sock", 00:19:54.682 "config": [ 00:19:54.682 { 00:19:54.682 "method": "sock_set_default_impl", 00:19:54.682 "params": { 00:19:54.682 "impl_name": "posix" 00:19:54.682 } 00:19:54.682 }, 00:19:54.682 { 00:19:54.682 "method": "sock_impl_set_options", 00:19:54.682 "params": { 00:19:54.682 "impl_name": "ssl", 00:19:54.682 "recv_buf_size": 4096, 00:19:54.682 "send_buf_size": 4096, 00:19:54.682 "enable_recv_pipe": true, 00:19:54.682 "enable_quickack": false, 00:19:54.682 "enable_placement_id": 0, 00:19:54.682 "enable_zerocopy_send_server": true, 00:19:54.682 "enable_zerocopy_send_client": false, 00:19:54.682 "zerocopy_threshold": 0, 00:19:54.682 "tls_version": 0, 00:19:54.682 "enable_ktls": false 00:19:54.682 } 00:19:54.682 }, 00:19:54.682 { 00:19:54.682 "method": "sock_impl_set_options", 00:19:54.682 "params": { 00:19:54.682 "impl_name": "posix", 00:19:54.682 "recv_buf_size": 2097152, 00:19:54.682 "send_buf_size": 2097152, 00:19:54.682 "enable_recv_pipe": true, 00:19:54.682 "enable_quickack": false, 00:19:54.682 "enable_placement_id": 0, 00:19:54.682 "enable_zerocopy_send_server": true, 00:19:54.682 "enable_zerocopy_send_client": false, 00:19:54.682 "zerocopy_threshold": 0, 00:19:54.683 "tls_version": 0, 00:19:54.683 "enable_ktls": false 00:19:54.683 } 00:19:54.683 } 00:19:54.683 ] 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "subsystem": "vmd", 00:19:54.683 "config": [] 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "subsystem": "accel", 00:19:54.683 "config": [ 00:19:54.683 { 00:19:54.683 "method": "accel_set_options", 00:19:54.683 "params": { 00:19:54.683 "small_cache_size": 128, 00:19:54.683 "large_cache_size": 16, 00:19:54.683 "task_count": 2048, 00:19:54.683 "sequence_count": 2048, 00:19:54.683 "buf_count": 2048 00:19:54.683 } 00:19:54.683 } 00:19:54.683 ] 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "subsystem": "bdev", 00:19:54.683 
"config": [ 00:19:54.683 { 00:19:54.683 "method": "bdev_set_options", 00:19:54.683 "params": { 00:19:54.683 "bdev_io_pool_size": 65535, 00:19:54.683 "bdev_io_cache_size": 256, 00:19:54.683 "bdev_auto_examine": true, 00:19:54.683 "iobuf_small_cache_size": 128, 00:19:54.683 "iobuf_large_cache_size": 16 00:19:54.683 } 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "method": "bdev_raid_set_options", 00:19:54.683 "params": { 00:19:54.683 "process_window_size_kb": 1024, 00:19:54.683 "process_max_bandwidth_mb_sec": 0 00:19:54.683 } 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "method": "bdev_iscsi_set_options", 00:19:54.683 "params": { 00:19:54.683 "timeout_sec": 30 00:19:54.683 } 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "method": "bdev_nvme_set_options", 00:19:54.683 "params": { 00:19:54.683 "action_on_timeout": "none", 00:19:54.683 "timeout_us": 0, 00:19:54.683 "timeout_admin_us": 0, 00:19:54.683 "keep_alive_timeout_ms": 10000, 00:19:54.683 "arbitration_burst": 0, 00:19:54.683 "low_priority_weight": 0, 00:19:54.683 "medium_priority_weight": 0, 00:19:54.683 "high_priority_weight": 0, 00:19:54.683 "nvme_adminq_poll_period_us": 10000, 00:19:54.683 "nvme_ioq_poll_period_us": 0, 00:19:54.683 "io_queue_requests": 0, 00:19:54.683 "delay_cmd_submit": true, 00:19:54.683 "transport_retry_count": 4, 00:19:54.683 "bdev_retry_count": 3, 00:19:54.683 "transport_ack_timeout": 0, 00:19:54.683 "ctrlr_loss_timeout_sec": 0, 00:19:54.683 "reconnect_delay_sec": 0, 00:19:54.683 "fast_io_fail_timeout_sec": 0, 00:19:54.683 "disable_auto_failback": false, 00:19:54.683 "generate_uuids": false, 00:19:54.683 "transport_tos": 0, 00:19:54.683 "nvme_error_stat": false, 00:19:54.683 "rdma_srq_size": 0, 00:19:54.683 "io_path_stat": false, 00:19:54.683 "allow_accel_sequence": false, 00:19:54.683 "rdma_max_cq_size": 0, 00:19:54.683 "rdma_cm_event_timeout_ms": 0, 00:19:54.683 "dhchap_digests": [ 00:19:54.683 "sha256", 00:19:54.683 "sha384", 00:19:54.683 "sha512" 00:19:54.683 ], 00:19:54.683 
"dhchap_dhgroups": [ 00:19:54.683 "null", 00:19:54.683 "ffdhe2048", 00:19:54.683 "ffdhe3072", 00:19:54.683 "ffdhe4096", 00:19:54.683 "ffdhe6144", 00:19:54.683 "ffdhe8192" 00:19:54.683 ] 00:19:54.683 } 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "method": "bdev_nvme_set_hotplug", 00:19:54.683 "params": { 00:19:54.683 "period_us": 100000, 00:19:54.683 "enable": false 00:19:54.683 } 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "method": "bdev_malloc_create", 00:19:54.683 "params": { 00:19:54.683 "name": "malloc0", 00:19:54.683 "num_blocks": 8192, 00:19:54.683 "block_size": 4096, 00:19:54.683 "physical_block_size": 4096, 00:19:54.683 "uuid": "05563168-d54c-4313-add6-54221627547c", 00:19:54.683 "optimal_io_boundary": 0, 00:19:54.683 "md_size": 0, 00:19:54.683 "dif_type": 0, 00:19:54.683 "dif_is_head_of_md": false, 00:19:54.683 "dif_pi_format": 0 00:19:54.683 } 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "method": "bdev_wait_for_examine" 00:19:54.683 } 00:19:54.683 ] 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "subsystem": "nbd", 00:19:54.683 "config": [] 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "subsystem": "scheduler", 00:19:54.683 "config": [ 00:19:54.683 { 00:19:54.683 "method": "framework_set_scheduler", 00:19:54.683 "params": { 00:19:54.683 "name": "static" 00:19:54.683 } 00:19:54.683 } 00:19:54.683 ] 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "subsystem": "nvmf", 00:19:54.683 "config": [ 00:19:54.683 { 00:19:54.683 "method": "nvmf_set_config", 00:19:54.683 "params": { 00:19:54.683 "discovery_filter": "match_any", 00:19:54.683 "admin_cmd_passthru": { 00:19:54.683 "identify_ctrlr": false 00:19:54.683 }, 00:19:54.683 "dhchap_digests": [ 00:19:54.683 "sha256", 00:19:54.683 "sha384", 00:19:54.683 "sha512" 00:19:54.683 ], 00:19:54.683 "dhchap_dhgroups": [ 00:19:54.683 "null", 00:19:54.683 "ffdhe2048", 00:19:54.683 "ffdhe3072", 00:19:54.683 "ffdhe4096", 00:19:54.683 "ffdhe6144", 00:19:54.683 "ffdhe8192" 00:19:54.683 ] 00:19:54.683 } 00:19:54.683 }, 00:19:54.683 { 
00:19:54.683 "method": "nvmf_set_max_subsystems", 00:19:54.683 "params": { 00:19:54.683 "max_subsystems": 1024 00:19:54.683 } 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "method": "nvmf_set_crdt", 00:19:54.683 "params": { 00:19:54.683 "crdt1": 0, 00:19:54.683 "crdt2": 0, 00:19:54.683 "crdt3": 0 00:19:54.683 } 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "method": "nvmf_create_transport", 00:19:54.683 "params": { 00:19:54.683 "trtype": "TCP", 00:19:54.683 "max_queue_depth": 128, 00:19:54.683 "max_io_qpairs_per_ctrlr": 127, 00:19:54.683 "in_capsule_data_size": 4096, 00:19:54.683 "max_io_size": 131072, 00:19:54.683 "io_unit_size": 131072, 00:19:54.683 "max_aq_depth": 128, 00:19:54.683 "num_shared_buffers": 511, 00:19:54.683 "buf_cache_size": 4294967295, 00:19:54.683 "dif_insert_or_strip": false, 00:19:54.683 "zcopy": false, 00:19:54.683 "c2h_success": false, 00:19:54.683 "sock_priority": 0, 00:19:54.683 "abort_timeout_sec": 1, 00:19:54.683 "ack_timeout": 0, 00:19:54.683 "data_wr_pool_size": 0 00:19:54.683 } 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "method": "nvmf_create_subsystem", 00:19:54.683 "params": { 00:19:54.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.683 "allow_any_host": false, 00:19:54.683 "serial_number": "SPDK00000000000001", 00:19:54.683 "model_number": "SPDK bdev Controller", 00:19:54.683 "max_namespaces": 10, 00:19:54.683 "min_cntlid": 1, 00:19:54.683 "max_cntlid": 65519, 00:19:54.683 "ana_reporting": false 00:19:54.683 } 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "method": "nvmf_subsystem_add_host", 00:19:54.683 "params": { 00:19:54.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.683 "host": "nqn.2016-06.io.spdk:host1", 00:19:54.683 "psk": "key0" 00:19:54.683 } 00:19:54.683 }, 00:19:54.683 { 00:19:54.683 "method": "nvmf_subsystem_add_ns", 00:19:54.683 "params": { 00:19:54.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.683 "namespace": { 00:19:54.683 "nsid": 1, 00:19:54.683 "bdev_name": "malloc0", 00:19:54.683 "nguid": 
"05563168D54C4313ADD654221627547C", 00:19:54.684 "uuid": "05563168-d54c-4313-add6-54221627547c", 00:19:54.684 "no_auto_visible": false 00:19:54.684 } 00:19:54.684 } 00:19:54.684 }, 00:19:54.684 { 00:19:54.684 "method": "nvmf_subsystem_add_listener", 00:19:54.684 "params": { 00:19:54.684 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.684 "listen_address": { 00:19:54.684 "trtype": "TCP", 00:19:54.684 "adrfam": "IPv4", 00:19:54.684 "traddr": "10.0.0.2", 00:19:54.684 "trsvcid": "4420" 00:19:54.684 }, 00:19:54.684 "secure_channel": true 00:19:54.684 } 00:19:54.684 } 00:19:54.684 ] 00:19:54.684 } 00:19:54.684 ] 00:19:54.684 }' 00:19:54.944 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3276537 00:19:54.945 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3276537 00:19:54.945 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:54.945 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3276537 ']' 00:19:54.945 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.945 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:54.945 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:54.945 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:54.945 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.945 [2024-11-06 11:01:46.157352] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:19:54.945 [2024-11-06 11:01:46.157409] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.945 [2024-11-06 11:01:46.247533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.945 [2024-11-06 11:01:46.276459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.945 [2024-11-06 11:01:46.276488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.945 [2024-11-06 11:01:46.276494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.945 [2024-11-06 11:01:46.276499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:54.945 [2024-11-06 11:01:46.276503] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:54.945 [2024-11-06 11:01:46.276978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.207 [2024-11-06 11:01:46.469507] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.207 [2024-11-06 11:01:46.501535] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:55.207 [2024-11-06 11:01:46.501753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.780 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:55.780 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:55.780 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:55.780 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:55.780 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.780 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.780 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3276589 00:19:55.780 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3276589 /var/tmp/bdevperf.sock 00:19:55.780 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3276589 ']' 00:19:55.780 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.780 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:55.780 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:19:55.780 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.780 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:55.780 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.780 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:55.780 "subsystems": [ 00:19:55.780 { 00:19:55.780 "subsystem": "keyring", 00:19:55.780 "config": [ 00:19:55.780 { 00:19:55.780 "method": "keyring_file_add_key", 00:19:55.780 "params": { 00:19:55.780 "name": "key0", 00:19:55.780 "path": "/tmp/tmp.rFnE70yuWz" 00:19:55.780 } 00:19:55.780 } 00:19:55.780 ] 00:19:55.780 }, 00:19:55.780 { 00:19:55.780 "subsystem": "iobuf", 00:19:55.780 "config": [ 00:19:55.780 { 00:19:55.780 "method": "iobuf_set_options", 00:19:55.780 "params": { 00:19:55.780 "small_pool_count": 8192, 00:19:55.780 "large_pool_count": 1024, 00:19:55.780 "small_bufsize": 8192, 00:19:55.780 "large_bufsize": 135168, 00:19:55.780 "enable_numa": false 00:19:55.780 } 00:19:55.780 } 00:19:55.780 ] 00:19:55.780 }, 00:19:55.780 { 00:19:55.780 "subsystem": "sock", 00:19:55.780 "config": [ 00:19:55.780 { 00:19:55.780 "method": "sock_set_default_impl", 00:19:55.780 "params": { 00:19:55.780 "impl_name": "posix" 00:19:55.780 } 00:19:55.780 }, 00:19:55.780 { 00:19:55.780 "method": "sock_impl_set_options", 00:19:55.780 "params": { 00:19:55.780 "impl_name": "ssl", 00:19:55.780 "recv_buf_size": 4096, 00:19:55.780 "send_buf_size": 4096, 00:19:55.780 "enable_recv_pipe": true, 00:19:55.780 "enable_quickack": false, 00:19:55.780 "enable_placement_id": 0, 00:19:55.780 "enable_zerocopy_send_server": true, 00:19:55.780 "enable_zerocopy_send_client": false, 
00:19:55.780 "zerocopy_threshold": 0, 00:19:55.780 "tls_version": 0, 00:19:55.780 "enable_ktls": false 00:19:55.780 } 00:19:55.780 }, 00:19:55.780 { 00:19:55.780 "method": "sock_impl_set_options", 00:19:55.780 "params": { 00:19:55.780 "impl_name": "posix", 00:19:55.780 "recv_buf_size": 2097152, 00:19:55.780 "send_buf_size": 2097152, 00:19:55.780 "enable_recv_pipe": true, 00:19:55.780 "enable_quickack": false, 00:19:55.780 "enable_placement_id": 0, 00:19:55.780 "enable_zerocopy_send_server": true, 00:19:55.780 "enable_zerocopy_send_client": false, 00:19:55.780 "zerocopy_threshold": 0, 00:19:55.780 "tls_version": 0, 00:19:55.780 "enable_ktls": false 00:19:55.780 } 00:19:55.780 } 00:19:55.780 ] 00:19:55.780 }, 00:19:55.780 { 00:19:55.780 "subsystem": "vmd", 00:19:55.780 "config": [] 00:19:55.780 }, 00:19:55.780 { 00:19:55.780 "subsystem": "accel", 00:19:55.780 "config": [ 00:19:55.780 { 00:19:55.780 "method": "accel_set_options", 00:19:55.780 "params": { 00:19:55.780 "small_cache_size": 128, 00:19:55.780 "large_cache_size": 16, 00:19:55.780 "task_count": 2048, 00:19:55.780 "sequence_count": 2048, 00:19:55.780 "buf_count": 2048 00:19:55.780 } 00:19:55.780 } 00:19:55.780 ] 00:19:55.780 }, 00:19:55.780 { 00:19:55.780 "subsystem": "bdev", 00:19:55.780 "config": [ 00:19:55.780 { 00:19:55.780 "method": "bdev_set_options", 00:19:55.780 "params": { 00:19:55.780 "bdev_io_pool_size": 65535, 00:19:55.780 "bdev_io_cache_size": 256, 00:19:55.780 "bdev_auto_examine": true, 00:19:55.780 "iobuf_small_cache_size": 128, 00:19:55.780 "iobuf_large_cache_size": 16 00:19:55.780 } 00:19:55.780 }, 00:19:55.780 { 00:19:55.780 "method": "bdev_raid_set_options", 00:19:55.780 "params": { 00:19:55.780 "process_window_size_kb": 1024, 00:19:55.780 "process_max_bandwidth_mb_sec": 0 00:19:55.780 } 00:19:55.780 }, 00:19:55.780 { 00:19:55.780 "method": "bdev_iscsi_set_options", 00:19:55.780 "params": { 00:19:55.780 "timeout_sec": 30 00:19:55.780 } 00:19:55.780 }, 00:19:55.780 { 00:19:55.780 "method": 
"bdev_nvme_set_options", 00:19:55.780 "params": { 00:19:55.780 "action_on_timeout": "none", 00:19:55.780 "timeout_us": 0, 00:19:55.780 "timeout_admin_us": 0, 00:19:55.780 "keep_alive_timeout_ms": 10000, 00:19:55.780 "arbitration_burst": 0, 00:19:55.780 "low_priority_weight": 0, 00:19:55.780 "medium_priority_weight": 0, 00:19:55.780 "high_priority_weight": 0, 00:19:55.780 "nvme_adminq_poll_period_us": 10000, 00:19:55.780 "nvme_ioq_poll_period_us": 0, 00:19:55.780 "io_queue_requests": 512, 00:19:55.780 "delay_cmd_submit": true, 00:19:55.780 "transport_retry_count": 4, 00:19:55.780 "bdev_retry_count": 3, 00:19:55.780 "transport_ack_timeout": 0, 00:19:55.780 "ctrlr_loss_timeout_sec": 0, 00:19:55.780 "reconnect_delay_sec": 0, 00:19:55.780 "fast_io_fail_timeout_sec": 0, 00:19:55.780 "disable_auto_failback": false, 00:19:55.780 "generate_uuids": false, 00:19:55.780 "transport_tos": 0, 00:19:55.780 "nvme_error_stat": false, 00:19:55.780 "rdma_srq_size": 0, 00:19:55.780 "io_path_stat": false, 00:19:55.780 "allow_accel_sequence": false, 00:19:55.780 "rdma_max_cq_size": 0, 00:19:55.780 "rdma_cm_event_timeout_ms": 0, 00:19:55.780 "dhchap_digests": [ 00:19:55.780 "sha256", 00:19:55.780 "sha384", 00:19:55.781 "sha512" 00:19:55.781 ], 00:19:55.781 "dhchap_dhgroups": [ 00:19:55.781 "null", 00:19:55.781 "ffdhe2048", 00:19:55.781 "ffdhe3072", 00:19:55.781 "ffdhe4096", 00:19:55.781 "ffdhe6144", 00:19:55.781 "ffdhe8192" 00:19:55.781 ] 00:19:55.781 } 00:19:55.781 }, 00:19:55.781 { 00:19:55.781 "method": "bdev_nvme_attach_controller", 00:19:55.781 "params": { 00:19:55.781 "name": "TLSTEST", 00:19:55.781 "trtype": "TCP", 00:19:55.781 "adrfam": "IPv4", 00:19:55.781 "traddr": "10.0.0.2", 00:19:55.781 "trsvcid": "4420", 00:19:55.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.781 "prchk_reftag": false, 00:19:55.781 "prchk_guard": false, 00:19:55.781 "ctrlr_loss_timeout_sec": 0, 00:19:55.781 "reconnect_delay_sec": 0, 00:19:55.781 "fast_io_fail_timeout_sec": 0, 00:19:55.781 "psk": 
"key0", 00:19:55.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.781 "hdgst": false, 00:19:55.781 "ddgst": false, 00:19:55.781 "multipath": "multipath" 00:19:55.781 } 00:19:55.781 }, 00:19:55.781 { 00:19:55.781 "method": "bdev_nvme_set_hotplug", 00:19:55.781 "params": { 00:19:55.781 "period_us": 100000, 00:19:55.781 "enable": false 00:19:55.781 } 00:19:55.781 }, 00:19:55.781 { 00:19:55.781 "method": "bdev_wait_for_examine" 00:19:55.781 } 00:19:55.781 ] 00:19:55.781 }, 00:19:55.781 { 00:19:55.781 "subsystem": "nbd", 00:19:55.781 "config": [] 00:19:55.781 } 00:19:55.781 ] 00:19:55.781 }' 00:19:55.781 [2024-11-06 11:01:47.027018] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:19:55.781 [2024-11-06 11:01:47.027070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276589 ] 00:19:55.781 [2024-11-06 11:01:47.084266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.781 [2024-11-06 11:01:47.113268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.042 [2024-11-06 11:01:47.247281] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.614 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:56.614 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:56.614 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:56.614 Running I/O for 10 seconds... 
00:19:58.547 5853.00 IOPS, 22.86 MiB/s [2024-11-06T10:01:50.913Z] 5962.00 IOPS, 23.29 MiB/s [2024-11-06T10:01:52.300Z] 5890.67 IOPS, 23.01 MiB/s [2024-11-06T10:01:53.244Z] 5990.00 IOPS, 23.40 MiB/s [2024-11-06T10:01:54.187Z] 5934.40 IOPS, 23.18 MiB/s [2024-11-06T10:01:55.131Z] 5939.50 IOPS, 23.20 MiB/s [2024-11-06T10:01:56.074Z] 5956.86 IOPS, 23.27 MiB/s [2024-11-06T10:01:57.018Z] 5964.88 IOPS, 23.30 MiB/s [2024-11-06T10:01:57.962Z] 5994.67 IOPS, 23.42 MiB/s [2024-11-06T10:01:57.962Z] 5936.90 IOPS, 23.19 MiB/s 00:20:06.540 Latency(us) 00:20:06.540 [2024-11-06T10:01:57.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.540 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:06.540 Verification LBA range: start 0x0 length 0x2000 00:20:06.540 TLSTESTn1 : 10.01 5941.29 23.21 0.00 0.00 21512.46 4642.13 29491.20 00:20:06.540 [2024-11-06T10:01:57.962Z] =================================================================================================================== 00:20:06.540 [2024-11-06T10:01:57.962Z] Total : 5941.29 23.21 0.00 0.00 21512.46 4642.13 29491.20 00:20:06.540 { 00:20:06.540 "results": [ 00:20:06.540 { 00:20:06.540 "job": "TLSTESTn1", 00:20:06.540 "core_mask": "0x4", 00:20:06.540 "workload": "verify", 00:20:06.540 "status": "finished", 00:20:06.540 "verify_range": { 00:20:06.540 "start": 0, 00:20:06.540 "length": 8192 00:20:06.540 }, 00:20:06.540 "queue_depth": 128, 00:20:06.540 "io_size": 4096, 00:20:06.540 "runtime": 10.01399, 00:20:06.540 "iops": 5941.288137895085, 00:20:06.540 "mibps": 23.208156788652676, 00:20:06.540 "io_failed": 0, 00:20:06.540 "io_timeout": 0, 00:20:06.540 "avg_latency_us": 21512.459669221458, 00:20:06.540 "min_latency_us": 4642.133333333333, 00:20:06.540 "max_latency_us": 29491.2 00:20:06.540 } 00:20:06.540 ], 00:20:06.540 "core_count": 1 00:20:06.540 } 00:20:06.540 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:20:06.540 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3276589 00:20:06.540 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3276589 ']' 00:20:06.540 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3276589 00:20:06.540 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:06.540 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:06.540 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3276589 00:20:06.801 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:06.801 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:06.801 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3276589' 00:20:06.801 killing process with pid 3276589 00:20:06.801 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3276589 00:20:06.801 Received shutdown signal, test time was about 10.000000 seconds 00:20:06.801 00:20:06.801 Latency(us) 00:20:06.801 [2024-11-06T10:01:58.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.801 [2024-11-06T10:01:58.223Z] =================================================================================================================== 00:20:06.801 [2024-11-06T10:01:58.223Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.801 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3276589 00:20:06.801 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3276537 00:20:06.801 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 
-- # '[' -z 3276537 ']' 00:20:06.801 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3276537 00:20:06.801 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:06.801 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:06.801 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3276537 00:20:06.801 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:06.801 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:06.801 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3276537' 00:20:06.801 killing process with pid 3276537 00:20:06.801 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3276537 00:20:06.801 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3276537 00:20:07.062 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:07.062 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:07.062 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:07.062 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.062 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3278933 00:20:07.062 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3278933 00:20:07.062 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:07.062 11:01:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3278933 ']' 00:20:07.062 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.062 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:07.062 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.062 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:07.062 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.062 [2024-11-06 11:01:58.350018] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:20:07.062 [2024-11-06 11:01:58.350077] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.062 [2024-11-06 11:01:58.425100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.062 [2024-11-06 11:01:58.459632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.062 [2024-11-06 11:01:58.459668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.062 [2024-11-06 11:01:58.459675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.062 [2024-11-06 11:01:58.459682] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:07.062 [2024-11-06 11:01:58.459688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.062 [2024-11-06 11:01:58.460261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.005 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:08.005 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:08.005 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:08.005 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:08.005 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.005 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.005 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.rFnE70yuWz 00:20:08.005 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rFnE70yuWz 00:20:08.005 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:08.005 [2024-11-06 11:01:59.320623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.005 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:08.266 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:08.526 [2024-11-06 11:01:59.689555] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:20:08.526 [2024-11-06 11:01:59.689790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.526 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:08.526 malloc0 00:20:08.526 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:08.787 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rFnE70yuWz 00:20:09.048 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:09.048 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:09.048 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3279302 00:20:09.048 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.048 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3279302 /var/tmp/bdevperf.sock 00:20:09.048 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3279302 ']' 00:20:09.048 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.048 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:09.048 
11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.048 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:09.048 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.309 [2024-11-06 11:02:00.482403] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:20:09.309 [2024-11-06 11:02:00.482458] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279302 ] 00:20:09.309 [2024-11-06 11:02:00.568006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.309 [2024-11-06 11:02:00.597512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.881 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:09.881 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:09.881 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rFnE70yuWz 00:20:10.142 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:10.403 [2024-11-06 11:02:01.596986] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:20:10.403 nvme0n1 00:20:10.403 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:10.403 Running I/O for 1 seconds... 00:20:11.790 5017.00 IOPS, 19.60 MiB/s 00:20:11.790 Latency(us) 00:20:11.790 [2024-11-06T10:02:03.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.790 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:11.790 Verification LBA range: start 0x0 length 0x2000 00:20:11.790 nvme0n1 : 1.02 5054.87 19.75 0.00 0.00 25145.99 4669.44 26978.99 00:20:11.790 [2024-11-06T10:02:03.212Z] =================================================================================================================== 00:20:11.790 [2024-11-06T10:02:03.212Z] Total : 5054.87 19.75 0.00 0.00 25145.99 4669.44 26978.99 00:20:11.790 { 00:20:11.790 "results": [ 00:20:11.790 { 00:20:11.790 "job": "nvme0n1", 00:20:11.790 "core_mask": "0x2", 00:20:11.790 "workload": "verify", 00:20:11.790 "status": "finished", 00:20:11.790 "verify_range": { 00:20:11.790 "start": 0, 00:20:11.790 "length": 8192 00:20:11.790 }, 00:20:11.790 "queue_depth": 128, 00:20:11.790 "io_size": 4096, 00:20:11.790 "runtime": 1.017831, 00:20:11.790 "iops": 5054.866672365059, 00:20:11.790 "mibps": 19.74557293892601, 00:20:11.790 "io_failed": 0, 00:20:11.790 "io_timeout": 0, 00:20:11.790 "avg_latency_us": 25145.987524457403, 00:20:11.790 "min_latency_us": 4669.44, 00:20:11.790 "max_latency_us": 26978.986666666668 00:20:11.790 } 00:20:11.790 ], 00:20:11.790 "core_count": 1 00:20:11.790 } 00:20:11.790 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3279302 00:20:11.790 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3279302 ']' 00:20:11.790 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # kill -0 3279302 00:20:11.790 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:11.790 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:11.790 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3279302 00:20:11.790 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:11.790 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:11.790 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3279302' 00:20:11.790 killing process with pid 3279302 00:20:11.790 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3279302 00:20:11.790 Received shutdown signal, test time was about 1.000000 seconds 00:20:11.790 00:20:11.790 Latency(us) 00:20:11.790 [2024-11-06T10:02:03.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.790 [2024-11-06T10:02:03.212Z] =================================================================================================================== 00:20:11.790 [2024-11-06T10:02:03.212Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:11.790 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3279302 00:20:11.790 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3278933 00:20:11.790 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3278933 ']' 00:20:11.790 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3278933 00:20:11.790 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:11.790 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:11.790 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3278933 00:20:11.790 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:11.790 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:11.790 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3278933' 00:20:11.790 killing process with pid 3278933 00:20:11.790 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3278933 00:20:11.790 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3278933 00:20:11.790 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:11.790 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:11.790 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:11.790 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.790 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3279854 00:20:11.790 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3279854 00:20:11.790 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:11.790 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3279854 ']' 00:20:11.790 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.791 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # 
local max_retries=100 00:20:11.791 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.791 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:11.791 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.052 [2024-11-06 11:02:03.223006] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:20:12.052 [2024-11-06 11:02:03.223065] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.052 [2024-11-06 11:02:03.301311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.052 [2024-11-06 11:02:03.336089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.052 [2024-11-06 11:02:03.336124] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.052 [2024-11-06 11:02:03.336132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.052 [2024-11-06 11:02:03.336139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.052 [2024-11-06 11:02:03.336145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:12.052 [2024-11-06 11:02:03.336695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.625 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:12.625 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:12.625 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:12.625 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:12.625 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.625 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.625 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:12.625 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.625 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.887 [2024-11-06 11:02:04.048790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.887 malloc0 00:20:12.887 [2024-11-06 11:02:04.075493] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.887 [2024-11-06 11:02:04.075729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.887 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.887 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3280012 00:20:12.887 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3280012 /var/tmp/bdevperf.sock 00:20:12.887 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:12.887 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3280012 ']' 00:20:12.887 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.887 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:12.887 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.887 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:12.887 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.887 [2024-11-06 11:02:04.163420] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:20:12.887 [2024-11-06 11:02:04.163485] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280012 ] 00:20:12.887 [2024-11-06 11:02:04.245670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.887 [2024-11-06 11:02:04.275416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.831 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:13.831 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:13.831 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rFnE70yuWz 00:20:13.831 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:14.092 [2024-11-06 11:02:05.262802] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.092 nvme0n1 00:20:14.092 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:14.092 Running I/O for 1 seconds... 
00:20:15.298 4561.00 IOPS, 17.82 MiB/s 00:20:15.298 Latency(us) 00:20:15.298 [2024-11-06T10:02:06.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.298 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:15.298 Verification LBA range: start 0x0 length 0x2000 00:20:15.298 nvme0n1 : 1.02 4581.20 17.90 0.00 0.00 27663.73 7045.12 38447.79 00:20:15.298 [2024-11-06T10:02:06.720Z] =================================================================================================================== 00:20:15.298 [2024-11-06T10:02:06.720Z] Total : 4581.20 17.90 0.00 0.00 27663.73 7045.12 38447.79 00:20:15.298 { 00:20:15.298 "results": [ 00:20:15.298 { 00:20:15.298 "job": "nvme0n1", 00:20:15.298 "core_mask": "0x2", 00:20:15.298 "workload": "verify", 00:20:15.298 "status": "finished", 00:20:15.298 "verify_range": { 00:20:15.298 "start": 0, 00:20:15.298 "length": 8192 00:20:15.298 }, 00:20:15.298 "queue_depth": 128, 00:20:15.298 "io_size": 4096, 00:20:15.298 "runtime": 1.023531, 00:20:15.298 "iops": 4581.199787793433, 00:20:15.298 "mibps": 17.895311671068097, 00:20:15.298 "io_failed": 0, 00:20:15.298 "io_timeout": 0, 00:20:15.298 "avg_latency_us": 27663.726520224638, 00:20:15.298 "min_latency_us": 7045.12, 00:20:15.298 "max_latency_us": 38447.78666666667 00:20:15.298 } 00:20:15.298 ], 00:20:15.298 "core_count": 1 00:20:15.298 } 00:20:15.298 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:15.298 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.298 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.298 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.298 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:15.298 "subsystems": [ 00:20:15.298 { 00:20:15.298 "subsystem": "keyring", 
00:20:15.298 "config": [ 00:20:15.298 { 00:20:15.298 "method": "keyring_file_add_key", 00:20:15.298 "params": { 00:20:15.298 "name": "key0", 00:20:15.298 "path": "/tmp/tmp.rFnE70yuWz" 00:20:15.298 } 00:20:15.298 } 00:20:15.298 ] 00:20:15.298 }, 00:20:15.298 { 00:20:15.298 "subsystem": "iobuf", 00:20:15.298 "config": [ 00:20:15.298 { 00:20:15.298 "method": "iobuf_set_options", 00:20:15.298 "params": { 00:20:15.298 "small_pool_count": 8192, 00:20:15.298 "large_pool_count": 1024, 00:20:15.298 "small_bufsize": 8192, 00:20:15.298 "large_bufsize": 135168, 00:20:15.298 "enable_numa": false 00:20:15.298 } 00:20:15.298 } 00:20:15.298 ] 00:20:15.298 }, 00:20:15.298 { 00:20:15.298 "subsystem": "sock", 00:20:15.298 "config": [ 00:20:15.298 { 00:20:15.298 "method": "sock_set_default_impl", 00:20:15.298 "params": { 00:20:15.298 "impl_name": "posix" 00:20:15.298 } 00:20:15.298 }, 00:20:15.298 { 00:20:15.298 "method": "sock_impl_set_options", 00:20:15.298 "params": { 00:20:15.298 "impl_name": "ssl", 00:20:15.298 "recv_buf_size": 4096, 00:20:15.298 "send_buf_size": 4096, 00:20:15.298 "enable_recv_pipe": true, 00:20:15.298 "enable_quickack": false, 00:20:15.298 "enable_placement_id": 0, 00:20:15.298 "enable_zerocopy_send_server": true, 00:20:15.298 "enable_zerocopy_send_client": false, 00:20:15.298 "zerocopy_threshold": 0, 00:20:15.298 "tls_version": 0, 00:20:15.298 "enable_ktls": false 00:20:15.298 } 00:20:15.298 }, 00:20:15.298 { 00:20:15.298 "method": "sock_impl_set_options", 00:20:15.298 "params": { 00:20:15.298 "impl_name": "posix", 00:20:15.298 "recv_buf_size": 2097152, 00:20:15.298 "send_buf_size": 2097152, 00:20:15.298 "enable_recv_pipe": true, 00:20:15.298 "enable_quickack": false, 00:20:15.298 "enable_placement_id": 0, 00:20:15.298 "enable_zerocopy_send_server": true, 00:20:15.298 "enable_zerocopy_send_client": false, 00:20:15.298 "zerocopy_threshold": 0, 00:20:15.298 "tls_version": 0, 00:20:15.298 "enable_ktls": false 00:20:15.298 } 00:20:15.298 } 00:20:15.298 ] 
00:20:15.298 }, 00:20:15.298 { 00:20:15.298 "subsystem": "vmd", 00:20:15.298 "config": [] 00:20:15.298 }, 00:20:15.298 { 00:20:15.298 "subsystem": "accel", 00:20:15.298 "config": [ 00:20:15.298 { 00:20:15.298 "method": "accel_set_options", 00:20:15.298 "params": { 00:20:15.298 "small_cache_size": 128, 00:20:15.298 "large_cache_size": 16, 00:20:15.298 "task_count": 2048, 00:20:15.298 "sequence_count": 2048, 00:20:15.299 "buf_count": 2048 00:20:15.299 } 00:20:15.299 } 00:20:15.299 ] 00:20:15.299 }, 00:20:15.299 { 00:20:15.299 "subsystem": "bdev", 00:20:15.299 "config": [ 00:20:15.299 { 00:20:15.299 "method": "bdev_set_options", 00:20:15.299 "params": { 00:20:15.299 "bdev_io_pool_size": 65535, 00:20:15.299 "bdev_io_cache_size": 256, 00:20:15.299 "bdev_auto_examine": true, 00:20:15.299 "iobuf_small_cache_size": 128, 00:20:15.299 "iobuf_large_cache_size": 16 00:20:15.299 } 00:20:15.299 }, 00:20:15.299 { 00:20:15.299 "method": "bdev_raid_set_options", 00:20:15.299 "params": { 00:20:15.299 "process_window_size_kb": 1024, 00:20:15.299 "process_max_bandwidth_mb_sec": 0 00:20:15.299 } 00:20:15.299 }, 00:20:15.299 { 00:20:15.299 "method": "bdev_iscsi_set_options", 00:20:15.299 "params": { 00:20:15.299 "timeout_sec": 30 00:20:15.299 } 00:20:15.299 }, 00:20:15.299 { 00:20:15.299 "method": "bdev_nvme_set_options", 00:20:15.299 "params": { 00:20:15.299 "action_on_timeout": "none", 00:20:15.299 "timeout_us": 0, 00:20:15.299 "timeout_admin_us": 0, 00:20:15.299 "keep_alive_timeout_ms": 10000, 00:20:15.299 "arbitration_burst": 0, 00:20:15.299 "low_priority_weight": 0, 00:20:15.299 "medium_priority_weight": 0, 00:20:15.299 "high_priority_weight": 0, 00:20:15.299 "nvme_adminq_poll_period_us": 10000, 00:20:15.299 "nvme_ioq_poll_period_us": 0, 00:20:15.299 "io_queue_requests": 0, 00:20:15.299 "delay_cmd_submit": true, 00:20:15.299 "transport_retry_count": 4, 00:20:15.299 "bdev_retry_count": 3, 00:20:15.299 "transport_ack_timeout": 0, 00:20:15.299 "ctrlr_loss_timeout_sec": 0, 00:20:15.299 
"reconnect_delay_sec": 0, 00:20:15.299 "fast_io_fail_timeout_sec": 0, 00:20:15.299 "disable_auto_failback": false, 00:20:15.299 "generate_uuids": false, 00:20:15.299 "transport_tos": 0, 00:20:15.299 "nvme_error_stat": false, 00:20:15.299 "rdma_srq_size": 0, 00:20:15.299 "io_path_stat": false, 00:20:15.299 "allow_accel_sequence": false, 00:20:15.299 "rdma_max_cq_size": 0, 00:20:15.299 "rdma_cm_event_timeout_ms": 0, 00:20:15.299 "dhchap_digests": [ 00:20:15.299 "sha256", 00:20:15.299 "sha384", 00:20:15.299 "sha512" 00:20:15.299 ], 00:20:15.299 "dhchap_dhgroups": [ 00:20:15.299 "null", 00:20:15.299 "ffdhe2048", 00:20:15.299 "ffdhe3072", 00:20:15.299 "ffdhe4096", 00:20:15.299 "ffdhe6144", 00:20:15.299 "ffdhe8192" 00:20:15.299 ] 00:20:15.299 } 00:20:15.299 }, 00:20:15.299 { 00:20:15.299 "method": "bdev_nvme_set_hotplug", 00:20:15.299 "params": { 00:20:15.299 "period_us": 100000, 00:20:15.299 "enable": false 00:20:15.299 } 00:20:15.299 }, 00:20:15.299 { 00:20:15.299 "method": "bdev_malloc_create", 00:20:15.299 "params": { 00:20:15.299 "name": "malloc0", 00:20:15.299 "num_blocks": 8192, 00:20:15.299 "block_size": 4096, 00:20:15.299 "physical_block_size": 4096, 00:20:15.299 "uuid": "a70882d0-26b5-44e9-874a-b87305cd3dd5", 00:20:15.299 "optimal_io_boundary": 0, 00:20:15.299 "md_size": 0, 00:20:15.299 "dif_type": 0, 00:20:15.299 "dif_is_head_of_md": false, 00:20:15.299 "dif_pi_format": 0 00:20:15.299 } 00:20:15.299 }, 00:20:15.299 { 00:20:15.299 "method": "bdev_wait_for_examine" 00:20:15.299 } 00:20:15.299 ] 00:20:15.299 }, 00:20:15.299 { 00:20:15.299 "subsystem": "nbd", 00:20:15.299 "config": [] 00:20:15.299 }, 00:20:15.299 { 00:20:15.299 "subsystem": "scheduler", 00:20:15.299 "config": [ 00:20:15.299 { 00:20:15.299 "method": "framework_set_scheduler", 00:20:15.299 "params": { 00:20:15.299 "name": "static" 00:20:15.299 } 00:20:15.299 } 00:20:15.299 ] 00:20:15.299 }, 00:20:15.299 { 00:20:15.299 "subsystem": "nvmf", 00:20:15.299 "config": [ 00:20:15.299 { 00:20:15.299 
"method": "nvmf_set_config", 00:20:15.299 "params": { 00:20:15.299 "discovery_filter": "match_any", 00:20:15.299 "admin_cmd_passthru": { 00:20:15.299 "identify_ctrlr": false 00:20:15.299 }, 00:20:15.299 "dhchap_digests": [ 00:20:15.299 "sha256", 00:20:15.299 "sha384", 00:20:15.299 "sha512" 00:20:15.299 ], 00:20:15.299 "dhchap_dhgroups": [ 00:20:15.299 "null", 00:20:15.299 "ffdhe2048", 00:20:15.299 "ffdhe3072", 00:20:15.299 "ffdhe4096", 00:20:15.299 "ffdhe6144", 00:20:15.299 "ffdhe8192" 00:20:15.299 ] 00:20:15.299 } 00:20:15.299 }, 00:20:15.299 { 00:20:15.299 "method": "nvmf_set_max_subsystems", 00:20:15.299 "params": { 00:20:15.299 "max_subsystems": 1024 00:20:15.299 } 00:20:15.299 }, 00:20:15.299 { 00:20:15.299 "method": "nvmf_set_crdt", 00:20:15.299 "params": { 00:20:15.299 "crdt1": 0, 00:20:15.299 "crdt2": 0, 00:20:15.299 "crdt3": 0 00:20:15.299 } 00:20:15.299 }, 00:20:15.299 { 00:20:15.299 "method": "nvmf_create_transport", 00:20:15.299 "params": { 00:20:15.299 "trtype": "TCP", 00:20:15.299 "max_queue_depth": 128, 00:20:15.299 "max_io_qpairs_per_ctrlr": 127, 00:20:15.299 "in_capsule_data_size": 4096, 00:20:15.299 "max_io_size": 131072, 00:20:15.299 "io_unit_size": 131072, 00:20:15.299 "max_aq_depth": 128, 00:20:15.299 "num_shared_buffers": 511, 00:20:15.299 "buf_cache_size": 4294967295, 00:20:15.299 "dif_insert_or_strip": false, 00:20:15.299 "zcopy": false, 00:20:15.299 "c2h_success": false, 00:20:15.299 "sock_priority": 0, 00:20:15.299 "abort_timeout_sec": 1, 00:20:15.299 "ack_timeout": 0, 00:20:15.299 "data_wr_pool_size": 0 00:20:15.299 } 00:20:15.299 }, 00:20:15.299 { 00:20:15.299 "method": "nvmf_create_subsystem", 00:20:15.299 "params": { 00:20:15.299 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.299 "allow_any_host": false, 00:20:15.299 "serial_number": "00000000000000000000", 00:20:15.299 "model_number": "SPDK bdev Controller", 00:20:15.299 "max_namespaces": 32, 00:20:15.299 "min_cntlid": 1, 00:20:15.299 "max_cntlid": 65519, 00:20:15.299 "ana_reporting": 
false 00:20:15.299 } 00:20:15.299 }, 00:20:15.299 { 00:20:15.299 "method": "nvmf_subsystem_add_host", 00:20:15.299 "params": { 00:20:15.299 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.299 "host": "nqn.2016-06.io.spdk:host1", 00:20:15.299 "psk": "key0" 00:20:15.299 } 00:20:15.299 }, 00:20:15.299 { 00:20:15.299 "method": "nvmf_subsystem_add_ns", 00:20:15.299 "params": { 00:20:15.299 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.299 "namespace": { 00:20:15.299 "nsid": 1, 00:20:15.299 "bdev_name": "malloc0", 00:20:15.299 "nguid": "A70882D026B544E9874AB87305CD3DD5", 00:20:15.299 "uuid": "a70882d0-26b5-44e9-874a-b87305cd3dd5", 00:20:15.299 "no_auto_visible": false 00:20:15.299 } 00:20:15.299 } 00:20:15.299 }, 00:20:15.299 { 00:20:15.299 "method": "nvmf_subsystem_add_listener", 00:20:15.299 "params": { 00:20:15.299 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.299 "listen_address": { 00:20:15.299 "trtype": "TCP", 00:20:15.299 "adrfam": "IPv4", 00:20:15.299 "traddr": "10.0.0.2", 00:20:15.299 "trsvcid": "4420" 00:20:15.299 }, 00:20:15.299 "secure_channel": false, 00:20:15.299 "sock_impl": "ssl" 00:20:15.299 } 00:20:15.299 } 00:20:15.299 ] 00:20:15.299 } 00:20:15.299 ] 00:20:15.299 }' 00:20:15.299 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:15.561 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:15.562 "subsystems": [ 00:20:15.562 { 00:20:15.562 "subsystem": "keyring", 00:20:15.562 "config": [ 00:20:15.562 { 00:20:15.562 "method": "keyring_file_add_key", 00:20:15.562 "params": { 00:20:15.562 "name": "key0", 00:20:15.562 "path": "/tmp/tmp.rFnE70yuWz" 00:20:15.562 } 00:20:15.562 } 00:20:15.562 ] 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "subsystem": "iobuf", 00:20:15.562 "config": [ 00:20:15.562 { 00:20:15.562 "method": "iobuf_set_options", 00:20:15.562 "params": { 00:20:15.562 "small_pool_count": 
8192, 00:20:15.562 "large_pool_count": 1024, 00:20:15.562 "small_bufsize": 8192, 00:20:15.562 "large_bufsize": 135168, 00:20:15.562 "enable_numa": false 00:20:15.562 } 00:20:15.562 } 00:20:15.562 ] 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "subsystem": "sock", 00:20:15.562 "config": [ 00:20:15.562 { 00:20:15.562 "method": "sock_set_default_impl", 00:20:15.562 "params": { 00:20:15.562 "impl_name": "posix" 00:20:15.562 } 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "method": "sock_impl_set_options", 00:20:15.562 "params": { 00:20:15.562 "impl_name": "ssl", 00:20:15.562 "recv_buf_size": 4096, 00:20:15.562 "send_buf_size": 4096, 00:20:15.562 "enable_recv_pipe": true, 00:20:15.562 "enable_quickack": false, 00:20:15.562 "enable_placement_id": 0, 00:20:15.562 "enable_zerocopy_send_server": true, 00:20:15.562 "enable_zerocopy_send_client": false, 00:20:15.562 "zerocopy_threshold": 0, 00:20:15.562 "tls_version": 0, 00:20:15.562 "enable_ktls": false 00:20:15.562 } 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "method": "sock_impl_set_options", 00:20:15.562 "params": { 00:20:15.562 "impl_name": "posix", 00:20:15.562 "recv_buf_size": 2097152, 00:20:15.562 "send_buf_size": 2097152, 00:20:15.562 "enable_recv_pipe": true, 00:20:15.562 "enable_quickack": false, 00:20:15.562 "enable_placement_id": 0, 00:20:15.562 "enable_zerocopy_send_server": true, 00:20:15.562 "enable_zerocopy_send_client": false, 00:20:15.562 "zerocopy_threshold": 0, 00:20:15.562 "tls_version": 0, 00:20:15.562 "enable_ktls": false 00:20:15.562 } 00:20:15.562 } 00:20:15.562 ] 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "subsystem": "vmd", 00:20:15.562 "config": [] 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "subsystem": "accel", 00:20:15.562 "config": [ 00:20:15.562 { 00:20:15.562 "method": "accel_set_options", 00:20:15.562 "params": { 00:20:15.562 "small_cache_size": 128, 00:20:15.562 "large_cache_size": 16, 00:20:15.562 "task_count": 2048, 00:20:15.562 "sequence_count": 2048, 00:20:15.562 "buf_count": 2048 
00:20:15.562 } 00:20:15.562 } 00:20:15.562 ] 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "subsystem": "bdev", 00:20:15.562 "config": [ 00:20:15.562 { 00:20:15.562 "method": "bdev_set_options", 00:20:15.562 "params": { 00:20:15.562 "bdev_io_pool_size": 65535, 00:20:15.562 "bdev_io_cache_size": 256, 00:20:15.562 "bdev_auto_examine": true, 00:20:15.562 "iobuf_small_cache_size": 128, 00:20:15.562 "iobuf_large_cache_size": 16 00:20:15.562 } 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "method": "bdev_raid_set_options", 00:20:15.562 "params": { 00:20:15.562 "process_window_size_kb": 1024, 00:20:15.562 "process_max_bandwidth_mb_sec": 0 00:20:15.562 } 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "method": "bdev_iscsi_set_options", 00:20:15.562 "params": { 00:20:15.562 "timeout_sec": 30 00:20:15.562 } 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "method": "bdev_nvme_set_options", 00:20:15.562 "params": { 00:20:15.562 "action_on_timeout": "none", 00:20:15.562 "timeout_us": 0, 00:20:15.562 "timeout_admin_us": 0, 00:20:15.562 "keep_alive_timeout_ms": 10000, 00:20:15.562 "arbitration_burst": 0, 00:20:15.562 "low_priority_weight": 0, 00:20:15.562 "medium_priority_weight": 0, 00:20:15.562 "high_priority_weight": 0, 00:20:15.562 "nvme_adminq_poll_period_us": 10000, 00:20:15.562 "nvme_ioq_poll_period_us": 0, 00:20:15.562 "io_queue_requests": 512, 00:20:15.562 "delay_cmd_submit": true, 00:20:15.562 "transport_retry_count": 4, 00:20:15.562 "bdev_retry_count": 3, 00:20:15.562 "transport_ack_timeout": 0, 00:20:15.562 "ctrlr_loss_timeout_sec": 0, 00:20:15.562 "reconnect_delay_sec": 0, 00:20:15.562 "fast_io_fail_timeout_sec": 0, 00:20:15.562 "disable_auto_failback": false, 00:20:15.562 "generate_uuids": false, 00:20:15.562 "transport_tos": 0, 00:20:15.562 "nvme_error_stat": false, 00:20:15.562 "rdma_srq_size": 0, 00:20:15.562 "io_path_stat": false, 00:20:15.562 "allow_accel_sequence": false, 00:20:15.562 "rdma_max_cq_size": 0, 00:20:15.562 "rdma_cm_event_timeout_ms": 0, 00:20:15.562 
"dhchap_digests": [ 00:20:15.562 "sha256", 00:20:15.562 "sha384", 00:20:15.562 "sha512" 00:20:15.562 ], 00:20:15.562 "dhchap_dhgroups": [ 00:20:15.562 "null", 00:20:15.562 "ffdhe2048", 00:20:15.562 "ffdhe3072", 00:20:15.562 "ffdhe4096", 00:20:15.562 "ffdhe6144", 00:20:15.562 "ffdhe8192" 00:20:15.562 ] 00:20:15.562 } 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "method": "bdev_nvme_attach_controller", 00:20:15.562 "params": { 00:20:15.562 "name": "nvme0", 00:20:15.562 "trtype": "TCP", 00:20:15.562 "adrfam": "IPv4", 00:20:15.562 "traddr": "10.0.0.2", 00:20:15.562 "trsvcid": "4420", 00:20:15.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.562 "prchk_reftag": false, 00:20:15.562 "prchk_guard": false, 00:20:15.562 "ctrlr_loss_timeout_sec": 0, 00:20:15.562 "reconnect_delay_sec": 0, 00:20:15.562 "fast_io_fail_timeout_sec": 0, 00:20:15.562 "psk": "key0", 00:20:15.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.562 "hdgst": false, 00:20:15.562 "ddgst": false, 00:20:15.562 "multipath": "multipath" 00:20:15.562 } 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "method": "bdev_nvme_set_hotplug", 00:20:15.562 "params": { 00:20:15.562 "period_us": 100000, 00:20:15.562 "enable": false 00:20:15.562 } 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "method": "bdev_enable_histogram", 00:20:15.562 "params": { 00:20:15.562 "name": "nvme0n1", 00:20:15.562 "enable": true 00:20:15.562 } 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "method": "bdev_wait_for_examine" 00:20:15.562 } 00:20:15.562 ] 00:20:15.562 }, 00:20:15.562 { 00:20:15.562 "subsystem": "nbd", 00:20:15.562 "config": [] 00:20:15.562 } 00:20:15.562 ] 00:20:15.562 }' 00:20:15.562 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3280012 00:20:15.562 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3280012 ']' 00:20:15.562 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3280012 00:20:15.562 11:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:15.562 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:15.562 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3280012 00:20:15.562 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:15.562 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:15.562 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3280012' 00:20:15.562 killing process with pid 3280012 00:20:15.562 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3280012 00:20:15.562 Received shutdown signal, test time was about 1.000000 seconds 00:20:15.562 00:20:15.562 Latency(us) 00:20:15.562 [2024-11-06T10:02:06.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.562 [2024-11-06T10:02:06.984Z] =================================================================================================================== 00:20:15.562 [2024-11-06T10:02:06.985Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.563 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3280012 00:20:15.825 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3279854 00:20:15.825 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3279854 ']' 00:20:15.825 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3279854 00:20:15.825 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:15.825 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:15.825 
11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3279854 00:20:15.825 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:15.825 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:15.825 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3279854' 00:20:15.825 killing process with pid 3279854 00:20:15.825 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3279854 00:20:15.825 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3279854 00:20:15.825 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:15.825 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:15.825 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:15.825 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:15.825 "subsystems": [ 00:20:15.825 { 00:20:15.825 "subsystem": "keyring", 00:20:15.825 "config": [ 00:20:15.825 { 00:20:15.825 "method": "keyring_file_add_key", 00:20:15.825 "params": { 00:20:15.825 "name": "key0", 00:20:15.825 "path": "/tmp/tmp.rFnE70yuWz" 00:20:15.825 } 00:20:15.825 } 00:20:15.825 ] 00:20:15.825 }, 00:20:15.825 { 00:20:15.825 "subsystem": "iobuf", 00:20:15.825 "config": [ 00:20:15.825 { 00:20:15.825 "method": "iobuf_set_options", 00:20:15.825 "params": { 00:20:15.825 "small_pool_count": 8192, 00:20:15.825 "large_pool_count": 1024, 00:20:15.825 "small_bufsize": 8192, 00:20:15.825 "large_bufsize": 135168, 00:20:15.825 "enable_numa": false 00:20:15.825 } 00:20:15.825 } 00:20:15.825 ] 00:20:15.825 }, 00:20:15.825 { 00:20:15.825 "subsystem": "sock", 00:20:15.825 "config": [ 
00:20:15.825 { 00:20:15.825 "method": "sock_set_default_impl", 00:20:15.825 "params": { 00:20:15.825 "impl_name": "posix" 00:20:15.825 } 00:20:15.825 }, 00:20:15.825 { 00:20:15.825 "method": "sock_impl_set_options", 00:20:15.825 "params": { 00:20:15.825 "impl_name": "ssl", 00:20:15.825 "recv_buf_size": 4096, 00:20:15.825 "send_buf_size": 4096, 00:20:15.825 "enable_recv_pipe": true, 00:20:15.825 "enable_quickack": false, 00:20:15.825 "enable_placement_id": 0, 00:20:15.825 "enable_zerocopy_send_server": true, 00:20:15.825 "enable_zerocopy_send_client": false, 00:20:15.825 "zerocopy_threshold": 0, 00:20:15.825 "tls_version": 0, 00:20:15.825 "enable_ktls": false 00:20:15.825 } 00:20:15.825 }, 00:20:15.825 { 00:20:15.825 "method": "sock_impl_set_options", 00:20:15.825 "params": { 00:20:15.825 "impl_name": "posix", 00:20:15.825 "recv_buf_size": 2097152, 00:20:15.825 "send_buf_size": 2097152, 00:20:15.825 "enable_recv_pipe": true, 00:20:15.825 "enable_quickack": false, 00:20:15.825 "enable_placement_id": 0, 00:20:15.825 "enable_zerocopy_send_server": true, 00:20:15.825 "enable_zerocopy_send_client": false, 00:20:15.825 "zerocopy_threshold": 0, 00:20:15.825 "tls_version": 0, 00:20:15.825 "enable_ktls": false 00:20:15.825 } 00:20:15.825 } 00:20:15.825 ] 00:20:15.825 }, 00:20:15.825 { 00:20:15.825 "subsystem": "vmd", 00:20:15.825 "config": [] 00:20:15.825 }, 00:20:15.825 { 00:20:15.825 "subsystem": "accel", 00:20:15.825 "config": [ 00:20:15.825 { 00:20:15.825 "method": "accel_set_options", 00:20:15.825 "params": { 00:20:15.825 "small_cache_size": 128, 00:20:15.825 "large_cache_size": 16, 00:20:15.825 "task_count": 2048, 00:20:15.825 "sequence_count": 2048, 00:20:15.825 "buf_count": 2048 00:20:15.825 } 00:20:15.825 } 00:20:15.825 ] 00:20:15.825 }, 00:20:15.825 { 00:20:15.825 "subsystem": "bdev", 00:20:15.825 "config": [ 00:20:15.825 { 00:20:15.825 "method": "bdev_set_options", 00:20:15.825 "params": { 00:20:15.825 "bdev_io_pool_size": 65535, 00:20:15.825 "bdev_io_cache_size": 
256, 00:20:15.825 "bdev_auto_examine": true, 00:20:15.825 "iobuf_small_cache_size": 128, 00:20:15.825 "iobuf_large_cache_size": 16 00:20:15.825 } 00:20:15.825 }, 00:20:15.825 { 00:20:15.825 "method": "bdev_raid_set_options", 00:20:15.825 "params": { 00:20:15.825 "process_window_size_kb": 1024, 00:20:15.825 "process_max_bandwidth_mb_sec": 0 00:20:15.825 } 00:20:15.825 }, 00:20:15.825 { 00:20:15.825 "method": "bdev_iscsi_set_options", 00:20:15.825 "params": { 00:20:15.825 "timeout_sec": 30 00:20:15.825 } 00:20:15.825 }, 00:20:15.825 { 00:20:15.825 "method": "bdev_nvme_set_options", 00:20:15.825 "params": { 00:20:15.825 "action_on_timeout": "none", 00:20:15.825 "timeout_us": 0, 00:20:15.825 "timeout_admin_us": 0, 00:20:15.825 "keep_alive_timeout_ms": 10000, 00:20:15.825 "arbitration_burst": 0, 00:20:15.825 "low_priority_weight": 0, 00:20:15.825 "medium_priority_weight": 0, 00:20:15.825 "high_priority_weight": 0, 00:20:15.825 "nvme_adminq_poll_period_us": 10000, 00:20:15.825 "nvme_ioq_poll_period_us": 0, 00:20:15.825 "io_queue_requests": 0, 00:20:15.825 "delay_cmd_submit": true, 00:20:15.825 "transport_retry_count": 4, 00:20:15.825 "bdev_retry_count": 3, 00:20:15.825 "transport_ack_timeout": 0, 00:20:15.825 "ctrlr_loss_timeout_sec": 0, 00:20:15.825 "reconnect_delay_sec": 0, 00:20:15.825 "fast_io_fail_timeout_sec": 0, 00:20:15.825 "disable_auto_failback": false, 00:20:15.825 "generate_uuids": false, 00:20:15.825 "transport_tos": 0, 00:20:15.825 "nvme_error_stat": false, 00:20:15.825 "rdma_srq_size": 0, 00:20:15.825 "io_path_stat": false, 00:20:15.825 "allow_accel_sequence": false, 00:20:15.825 "rdma_max_cq_size": 0, 00:20:15.825 "rdma_cm_event_timeout_ms": 0, 00:20:15.825 "dhchap_digests": [ 00:20:15.825 "sha256", 00:20:15.825 "sha384", 00:20:15.825 "sha512" 00:20:15.825 ], 00:20:15.825 "dhchap_dhgroups": [ 00:20:15.825 "null", 00:20:15.825 "ffdhe2048", 00:20:15.825 "ffdhe3072", 00:20:15.825 "ffdhe4096", 00:20:15.825 "ffdhe6144", 00:20:15.825 "ffdhe8192" 00:20:15.825 ] 
00:20:15.825 } 00:20:15.825 }, 00:20:15.825 { 00:20:15.825 "method": "bdev_nvme_set_hotplug", 00:20:15.825 "params": { 00:20:15.825 "period_us": 100000, 00:20:15.825 "enable": false 00:20:15.825 } 00:20:15.825 }, 00:20:15.825 { 00:20:15.825 "method": "bdev_malloc_create", 00:20:15.825 "params": { 00:20:15.825 "name": "malloc0", 00:20:15.825 "num_blocks": 8192, 00:20:15.825 "block_size": 4096, 00:20:15.825 "physical_block_size": 4096, 00:20:15.825 "uuid": "a70882d0-26b5-44e9-874a-b87305cd3dd5", 00:20:15.825 "optimal_io_boundary": 0, 00:20:15.825 "md_size": 0, 00:20:15.825 "dif_type": 0, 00:20:15.825 "dif_is_head_of_md": false, 00:20:15.825 "dif_pi_format": 0 00:20:15.825 } 00:20:15.825 }, 00:20:15.825 { 00:20:15.825 "method": "bdev_wait_for_examine" 00:20:15.825 } 00:20:15.825 ] 00:20:15.825 }, 00:20:15.825 { 00:20:15.825 "subsystem": "nbd", 00:20:15.825 "config": [] 00:20:15.825 }, 00:20:15.825 { 00:20:15.825 "subsystem": "scheduler", 00:20:15.826 "config": [ 00:20:15.826 { 00:20:15.826 "method": "framework_set_scheduler", 00:20:15.826 "params": { 00:20:15.826 "name": "static" 00:20:15.826 } 00:20:15.826 } 00:20:15.826 ] 00:20:15.826 }, 00:20:15.826 { 00:20:15.826 "subsystem": "nvmf", 00:20:15.826 "config": [ 00:20:15.826 { 00:20:15.826 "method": "nvmf_set_config", 00:20:15.826 "params": { 00:20:15.826 "discovery_filter": "match_any", 00:20:15.826 "admin_cmd_passthru": { 00:20:15.826 "identify_ctrlr": false 00:20:15.826 }, 00:20:15.826 "dhchap_digests": [ 00:20:15.826 "sha256", 00:20:15.826 "sha384", 00:20:15.826 "sha512" 00:20:15.826 ], 00:20:15.826 "dhchap_dhgroups": [ 00:20:15.826 "null", 00:20:15.826 "ffdhe2048", 00:20:15.826 "ffdhe3072", 00:20:15.826 "ffdhe4096", 00:20:15.826 "ffdhe6144", 00:20:15.826 "ffdhe8192" 00:20:15.826 ] 00:20:15.826 } 00:20:15.826 }, 00:20:15.826 { 00:20:15.826 "method": "nvmf_set_max_subsystems", 00:20:15.826 "params": { 00:20:15.826 "max_subsystems": 1024 00:20:15.826 } 00:20:15.826 }, 00:20:15.826 { 00:20:15.826 "method": 
"nvmf_set_crdt", 00:20:15.826 "params": { 00:20:15.826 "crdt1": 0, 00:20:15.826 "crdt2": 0, 00:20:15.826 "crdt3": 0 00:20:15.826 } 00:20:15.826 }, 00:20:15.826 { 00:20:15.826 "method": "nvmf_create_transport", 00:20:15.826 "params": { 00:20:15.826 "trtype": "TCP", 00:20:15.826 "max_queue_depth": 128, 00:20:15.826 "max_io_qpairs_per_ctrlr": 127, 00:20:15.826 "in_capsule_data_size": 4096, 00:20:15.826 "max_io_size": 131072, 00:20:15.826 "io_unit_size": 131072, 00:20:15.826 "max_aq_depth": 128, 00:20:15.826 "num_shared_buffers": 511, 00:20:15.826 "buf_cache_size": 4294967295, 00:20:15.826 "dif_insert_or_strip": false, 00:20:15.826 "zcopy": false, 00:20:15.826 "c2h_success": false, 00:20:15.826 "sock_priority": 0, 00:20:15.826 "abort_timeout_sec": 1, 00:20:15.826 "ack_timeout": 0, 00:20:15.826 "data_wr_pool_size": 0 00:20:15.826 } 00:20:15.826 }, 00:20:15.826 { 00:20:15.826 "method": "nvmf_create_subsystem", 00:20:15.826 "params": { 00:20:15.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.826 "allow_any_host": false, 00:20:15.826 "serial_number": "00000000000000000000", 00:20:15.826 "model_number": "SPDK bdev Controller", 00:20:15.826 "max_namespaces": 32, 00:20:15.826 "min_cntlid": 1, 00:20:15.826 "max_cntlid": 65519, 00:20:15.826 "ana_reporting": false 00:20:15.826 } 00:20:15.826 }, 00:20:15.826 { 00:20:15.826 "method": "nvmf_subsystem_add_host", 00:20:15.826 "params": { 00:20:15.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.826 "host": "nqn.2016-06.io.spdk:host1", 00:20:15.826 "psk": "key0" 00:20:15.826 } 00:20:15.826 }, 00:20:15.826 { 00:20:15.826 "method": "nvmf_subsystem_add_ns", 00:20:15.826 "params": { 00:20:15.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.826 "namespace": { 00:20:15.826 "nsid": 1, 00:20:15.826 "bdev_name": "malloc0", 00:20:15.826 "nguid": "A70882D026B544E9874AB87305CD3DD5", 00:20:15.826 "uuid": "a70882d0-26b5-44e9-874a-b87305cd3dd5", 00:20:15.826 "no_auto_visible": false 00:20:15.826 } 00:20:15.826 } 00:20:15.826 }, 00:20:15.826 { 
00:20:15.826 "method": "nvmf_subsystem_add_listener", 00:20:15.826 "params": { 00:20:15.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.826 "listen_address": { 00:20:15.826 "trtype": "TCP", 00:20:15.826 "adrfam": "IPv4", 00:20:15.826 "traddr": "10.0.0.2", 00:20:15.826 "trsvcid": "4420" 00:20:15.826 }, 00:20:15.826 "secure_channel": false, 00:20:15.826 "sock_impl": "ssl" 00:20:15.826 } 00:20:15.826 } 00:20:15.826 ] 00:20:15.826 } 00:20:15.826 ] 00:20:15.826 }' 00:20:15.826 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.826 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3280696 00:20:15.826 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3280696 00:20:15.826 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:15.826 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3280696 ']' 00:20:15.826 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.826 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:15.826 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.826 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:15.826 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.088 [2024-11-06 11:02:07.273912] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:20:16.088 [2024-11-06 11:02:07.273968] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.088 [2024-11-06 11:02:07.348988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.088 [2024-11-06 11:02:07.383124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.088 [2024-11-06 11:02:07.383158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.088 [2024-11-06 11:02:07.383166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.088 [2024-11-06 11:02:07.383173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.088 [2024-11-06 11:02:07.383179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:16.088 [2024-11-06 11:02:07.383762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.349 [2024-11-06 11:02:07.582705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.349 [2024-11-06 11:02:07.614721] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:16.349 [2024-11-06 11:02:07.614970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.923 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:16.923 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:16.923 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:16.923 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:16.923 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.923 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.923 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3280796 00:20:16.923 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3280796 /var/tmp/bdevperf.sock 00:20:16.923 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3280796 ']' 00:20:16.923 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.923 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:16.923 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:16.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.923 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:16.923 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:16.923 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.923 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:16.923 "subsystems": [ 00:20:16.923 { 00:20:16.923 "subsystem": "keyring", 00:20:16.923 "config": [ 00:20:16.923 { 00:20:16.923 "method": "keyring_file_add_key", 00:20:16.923 "params": { 00:20:16.923 "name": "key0", 00:20:16.923 "path": "/tmp/tmp.rFnE70yuWz" 00:20:16.923 } 00:20:16.923 } 00:20:16.923 ] 00:20:16.923 }, 00:20:16.923 { 00:20:16.923 "subsystem": "iobuf", 00:20:16.923 "config": [ 00:20:16.923 { 00:20:16.923 "method": "iobuf_set_options", 00:20:16.923 "params": { 00:20:16.923 "small_pool_count": 8192, 00:20:16.923 "large_pool_count": 1024, 00:20:16.923 "small_bufsize": 8192, 00:20:16.923 "large_bufsize": 135168, 00:20:16.923 "enable_numa": false 00:20:16.923 } 00:20:16.923 } 00:20:16.923 ] 00:20:16.923 }, 00:20:16.923 { 00:20:16.923 "subsystem": "sock", 00:20:16.923 "config": [ 00:20:16.923 { 00:20:16.923 "method": "sock_set_default_impl", 00:20:16.923 "params": { 00:20:16.923 "impl_name": "posix" 00:20:16.923 } 00:20:16.923 }, 00:20:16.923 { 00:20:16.923 "method": "sock_impl_set_options", 00:20:16.923 "params": { 00:20:16.923 "impl_name": "ssl", 00:20:16.923 "recv_buf_size": 4096, 00:20:16.923 "send_buf_size": 4096, 00:20:16.923 "enable_recv_pipe": true, 00:20:16.923 "enable_quickack": false, 00:20:16.923 "enable_placement_id": 0, 00:20:16.923 "enable_zerocopy_send_server": true, 00:20:16.923 
"enable_zerocopy_send_client": false, 00:20:16.923 "zerocopy_threshold": 0, 00:20:16.923 "tls_version": 0, 00:20:16.923 "enable_ktls": false 00:20:16.923 } 00:20:16.923 }, 00:20:16.923 { 00:20:16.923 "method": "sock_impl_set_options", 00:20:16.923 "params": { 00:20:16.923 "impl_name": "posix", 00:20:16.923 "recv_buf_size": 2097152, 00:20:16.923 "send_buf_size": 2097152, 00:20:16.923 "enable_recv_pipe": true, 00:20:16.923 "enable_quickack": false, 00:20:16.923 "enable_placement_id": 0, 00:20:16.923 "enable_zerocopy_send_server": true, 00:20:16.923 "enable_zerocopy_send_client": false, 00:20:16.923 "zerocopy_threshold": 0, 00:20:16.923 "tls_version": 0, 00:20:16.923 "enable_ktls": false 00:20:16.923 } 00:20:16.923 } 00:20:16.923 ] 00:20:16.923 }, 00:20:16.923 { 00:20:16.923 "subsystem": "vmd", 00:20:16.923 "config": [] 00:20:16.923 }, 00:20:16.923 { 00:20:16.923 "subsystem": "accel", 00:20:16.923 "config": [ 00:20:16.923 { 00:20:16.923 "method": "accel_set_options", 00:20:16.923 "params": { 00:20:16.923 "small_cache_size": 128, 00:20:16.923 "large_cache_size": 16, 00:20:16.923 "task_count": 2048, 00:20:16.923 "sequence_count": 2048, 00:20:16.923 "buf_count": 2048 00:20:16.923 } 00:20:16.923 } 00:20:16.923 ] 00:20:16.923 }, 00:20:16.923 { 00:20:16.923 "subsystem": "bdev", 00:20:16.923 "config": [ 00:20:16.923 { 00:20:16.923 "method": "bdev_set_options", 00:20:16.924 "params": { 00:20:16.924 "bdev_io_pool_size": 65535, 00:20:16.924 "bdev_io_cache_size": 256, 00:20:16.924 "bdev_auto_examine": true, 00:20:16.924 "iobuf_small_cache_size": 128, 00:20:16.924 "iobuf_large_cache_size": 16 00:20:16.924 } 00:20:16.924 }, 00:20:16.924 { 00:20:16.924 "method": "bdev_raid_set_options", 00:20:16.924 "params": { 00:20:16.924 "process_window_size_kb": 1024, 00:20:16.924 "process_max_bandwidth_mb_sec": 0 00:20:16.924 } 00:20:16.924 }, 00:20:16.924 { 00:20:16.924 "method": "bdev_iscsi_set_options", 00:20:16.924 "params": { 00:20:16.924 "timeout_sec": 30 00:20:16.924 } 00:20:16.924 }, 
00:20:16.924 { 00:20:16.924 "method": "bdev_nvme_set_options", 00:20:16.924 "params": { 00:20:16.924 "action_on_timeout": "none", 00:20:16.924 "timeout_us": 0, 00:20:16.924 "timeout_admin_us": 0, 00:20:16.924 "keep_alive_timeout_ms": 10000, 00:20:16.924 "arbitration_burst": 0, 00:20:16.924 "low_priority_weight": 0, 00:20:16.924 "medium_priority_weight": 0, 00:20:16.924 "high_priority_weight": 0, 00:20:16.924 "nvme_adminq_poll_period_us": 10000, 00:20:16.924 "nvme_ioq_poll_period_us": 0, 00:20:16.924 "io_queue_requests": 512, 00:20:16.924 "delay_cmd_submit": true, 00:20:16.924 "transport_retry_count": 4, 00:20:16.924 "bdev_retry_count": 3, 00:20:16.924 "transport_ack_timeout": 0, 00:20:16.924 "ctrlr_loss_timeout_sec": 0, 00:20:16.924 "reconnect_delay_sec": 0, 00:20:16.924 "fast_io_fail_timeout_sec": 0, 00:20:16.924 "disable_auto_failback": false, 00:20:16.924 "generate_uuids": false, 00:20:16.924 "transport_tos": 0, 00:20:16.924 "nvme_error_stat": false, 00:20:16.924 "rdma_srq_size": 0, 00:20:16.924 "io_path_stat": false, 00:20:16.924 "allow_accel_sequence": false, 00:20:16.924 "rdma_max_cq_size": 0, 00:20:16.924 "rdma_cm_event_timeout_ms": 0, 00:20:16.924 "dhchap_digests": [ 00:20:16.924 "sha256", 00:20:16.924 "sha384", 00:20:16.924 "sha512" 00:20:16.924 ], 00:20:16.924 "dhchap_dhgroups": [ 00:20:16.924 "null", 00:20:16.924 "ffdhe2048", 00:20:16.924 "ffdhe3072", 00:20:16.924 "ffdhe4096", 00:20:16.924 "ffdhe6144", 00:20:16.924 "ffdhe8192" 00:20:16.924 ] 00:20:16.924 } 00:20:16.924 }, 00:20:16.924 { 00:20:16.924 "method": "bdev_nvme_attach_controller", 00:20:16.924 "params": { 00:20:16.924 "name": "nvme0", 00:20:16.924 "trtype": "TCP", 00:20:16.924 "adrfam": "IPv4", 00:20:16.924 "traddr": "10.0.0.2", 00:20:16.924 "trsvcid": "4420", 00:20:16.924 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.924 "prchk_reftag": false, 00:20:16.924 "prchk_guard": false, 00:20:16.924 "ctrlr_loss_timeout_sec": 0, 00:20:16.924 "reconnect_delay_sec": 0, 00:20:16.924 
"fast_io_fail_timeout_sec": 0, 00:20:16.924 "psk": "key0", 00:20:16.924 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.924 "hdgst": false, 00:20:16.924 "ddgst": false, 00:20:16.924 "multipath": "multipath" 00:20:16.924 } 00:20:16.924 }, 00:20:16.924 { 00:20:16.924 "method": "bdev_nvme_set_hotplug", 00:20:16.924 "params": { 00:20:16.924 "period_us": 100000, 00:20:16.924 "enable": false 00:20:16.924 } 00:20:16.924 }, 00:20:16.924 { 00:20:16.924 "method": "bdev_enable_histogram", 00:20:16.924 "params": { 00:20:16.924 "name": "nvme0n1", 00:20:16.924 "enable": true 00:20:16.924 } 00:20:16.924 }, 00:20:16.924 { 00:20:16.924 "method": "bdev_wait_for_examine" 00:20:16.924 } 00:20:16.924 ] 00:20:16.924 }, 00:20:16.924 { 00:20:16.924 "subsystem": "nbd", 00:20:16.924 "config": [] 00:20:16.924 } 00:20:16.924 ] 00:20:16.924 }' 00:20:16.924 [2024-11-06 11:02:08.146906] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:20:16.924 [2024-11-06 11:02:08.146961] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280796 ] 00:20:16.924 [2024-11-06 11:02:08.231440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.924 [2024-11-06 11:02:08.261501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.185 [2024-11-06 11:02:08.396615] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.758 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:17.758 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:17.758 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:20:17.758 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:17.758 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.758 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:18.019 Running I/O for 1 seconds... 00:20:18.964 4857.00 IOPS, 18.97 MiB/s 00:20:18.964 Latency(us) 00:20:18.964 [2024-11-06T10:02:10.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.964 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:18.964 Verification LBA range: start 0x0 length 0x2000 00:20:18.964 nvme0n1 : 1.02 4906.70 19.17 0.00 0.00 25884.75 4696.75 34078.72 00:20:18.964 [2024-11-06T10:02:10.386Z] =================================================================================================================== 00:20:18.964 [2024-11-06T10:02:10.386Z] Total : 4906.70 19.17 0.00 0.00 25884.75 4696.75 34078.72 00:20:18.964 { 00:20:18.964 "results": [ 00:20:18.964 { 00:20:18.964 "job": "nvme0n1", 00:20:18.964 "core_mask": "0x2", 00:20:18.964 "workload": "verify", 00:20:18.964 "status": "finished", 00:20:18.964 "verify_range": { 00:20:18.964 "start": 0, 00:20:18.964 "length": 8192 00:20:18.964 }, 00:20:18.964 "queue_depth": 128, 00:20:18.964 "io_size": 4096, 00:20:18.964 "runtime": 1.015958, 00:20:18.964 "iops": 4906.698898970233, 00:20:18.964 "mibps": 19.166792574102473, 00:20:18.964 "io_failed": 0, 00:20:18.964 "io_timeout": 0, 00:20:18.964 "avg_latency_us": 25884.75348177867, 00:20:18.964 "min_latency_us": 4696.746666666667, 00:20:18.964 "max_latency_us": 34078.72 00:20:18.964 } 00:20:18.964 ], 00:20:18.964 "core_count": 1 00:20:18.964 } 00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 
00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:18.964 nvmf_trace.0 00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3280796 00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3280796 ']' 00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3280796 00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:18.964 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# ps --no-headers -o comm= 3280796 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3280796' 00:20:19.225 killing process with pid 3280796 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3280796 00:20:19.225 Received shutdown signal, test time was about 1.000000 seconds 00:20:19.225 00:20:19.225 Latency(us) 00:20:19.225 [2024-11-06T10:02:10.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.225 [2024-11-06T10:02:10.647Z] =================================================================================================================== 00:20:19.225 [2024-11-06T10:02:10.647Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3280796 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:19.225 rmmod nvme_tcp 00:20:19.225 rmmod nvme_fabrics 00:20:19.225 rmmod nvme_keyring 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3280696 ']' 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3280696 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3280696 ']' 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3280696 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3280696 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3280696' 00:20:19.225 killing process with pid 3280696 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3280696 00:20:19.225 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3280696 00:20:19.487 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:19.487 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:19.487 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:19.487 11:02:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:19.487 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:19.487 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:19.487 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:19.487 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:19.487 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:19.487 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.487 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.487 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.403 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:21.665 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.qURjSD0Vy7 /tmp/tmp.zyRMkz6vkV /tmp/tmp.rFnE70yuWz 00:20:21.665 00:20:21.665 real 1m22.802s 00:20:21.665 user 2m8.501s 00:20:21.665 sys 0m26.309s 00:20:21.665 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:21.665 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.665 ************************************ 00:20:21.665 END TEST nvmf_tls 00:20:21.665 ************************************ 00:20:21.665 11:02:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:21.665 11:02:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:21.665 
11:02:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:21.665 11:02:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:21.665 ************************************ 00:20:21.665 START TEST nvmf_fips 00:20:21.665 ************************************ 00:20:21.665 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:21.665 * Looking for test storage... 00:20:21.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:21.665 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:21.665 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:21.665 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:21.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.929 --rc genhtml_branch_coverage=1 00:20:21.929 --rc genhtml_function_coverage=1 00:20:21.929 --rc genhtml_legend=1 00:20:21.929 --rc geninfo_all_blocks=1 00:20:21.929 --rc geninfo_unexecuted_blocks=1 00:20:21.929 00:20:21.929 ' 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:21.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.929 --rc genhtml_branch_coverage=1 00:20:21.929 --rc genhtml_function_coverage=1 00:20:21.929 --rc genhtml_legend=1 00:20:21.929 --rc geninfo_all_blocks=1 00:20:21.929 --rc geninfo_unexecuted_blocks=1 00:20:21.929 00:20:21.929 ' 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:21.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.929 --rc genhtml_branch_coverage=1 00:20:21.929 --rc genhtml_function_coverage=1 00:20:21.929 --rc genhtml_legend=1 00:20:21.929 --rc geninfo_all_blocks=1 00:20:21.929 --rc geninfo_unexecuted_blocks=1 00:20:21.929 00:20:21.929 ' 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:21.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.929 --rc genhtml_branch_coverage=1 00:20:21.929 --rc genhtml_function_coverage=1 00:20:21.929 --rc genhtml_legend=1 00:20:21.929 --rc geninfo_all_blocks=1 00:20:21.929 --rc geninfo_unexecuted_blocks=1 00:20:21.929 00:20:21.929 ' 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.929 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:21.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:21.930 Error setting digest 00:20:21.930 404253B5647F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:21.930 404253B5647F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:21.930 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:21.931 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.931 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:21.931 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:21.931 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:21.931 11:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.931 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.931 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.931 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:21.931 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:21.931 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:21.931 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:30.081 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:30.081 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:30.081 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:30.081 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:30.081 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:30.081 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:30.081 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:30.081 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:30.081 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:30.081 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:30.081 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:30.081 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:30.081 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
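The `ge 3.1.1 3.0.0` / `cmp_versions` walk traced earlier in this section (scripts/common.sh@333-368) splits both dotted versions on `.` and compares field by field until one side wins. A minimal standalone sketch of that comparison, under the assumption that fields are plain integers (the function name `version_ge` is illustrative, not SPDK's actual helper):

```shell
# Compare two dotted version strings numerically, field by field,
# in the spirit of the cmp_versions/ge helpers traced above.
# Returns 0 (shell success) when $1 >= $2.
version_ge() {
    local IFS=.
    local -a ver1=($1) ver2=($2)   # word-split each version on '.'
    local i n=${#ver1[@]}
    (( ${#ver2[@]} > n )) && n=${#ver2[@]}
    for (( i = 0; i < n; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing fields count as 0
        (( a > b )) && return 0
        (( a < b )) && return 1
    done
    return 0   # all fields equal
}

version_ge 3.1.1 3.0.0 && echo "3.1.1 >= 3.0.0"
```

This mirrors why the trace reaches `return 0` at scripts/common.sh@367: the major fields tie at 3, then minor 1 beats 0.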
00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:30.082 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:30.082 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:30.082 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
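The `pci_net_devs` steps above (nvmf/common.sh@411 and @427) resolve each PCI address to its kernel interface names by globbing `/sys/bus/pci/devices/<pci>/net/*` and then stripping the directory prefix. A self-contained sketch of the same two-step resolution, using a fake sysfs tree in a temp dir so it does not depend on real hardware:

```shell
# Map a PCI address to its network interface names the way the trace
# does: glob <sysfs>/<pci>/net/* and keep only the leaf names.
# A fake sysfs tree is created so the sketch is self-contained.
fake_sys=$(mktemp -d)
mkdir -p "$fake_sys/0000:4b:00.0/net/cvl_0_0"
pci_net_devs=("$fake_sys/0000:4b:00.0/net/"*)     # full paths from the glob
pci_net_devs=("${pci_net_devs[@]##*/}")           # strip dirs: leaf names only
echo "net devices under 0000:4b:00.0: ${pci_net_devs[*]}"
rm -rf "$fake_sys"
```

The `${var##*/}` expansion applied across the array is exactly the idiom at nvmf/common.sh@427 that turns the sysfs paths into names like `cvl_0_0`.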
00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:30.082 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:30.082 11:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:30.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:30.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:20:30.082 00:20:30.082 --- 10.0.0.2 ping statistics --- 00:20:30.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.082 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:30.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:30.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:20:30.082 00:20:30.082 --- 10.0.0.1 ping statistics --- 00:20:30.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.082 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:30.082 11:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3285622 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3285622 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:30.082 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3285622 ']' 00:20:30.083 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.083 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:30.083 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.083 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:30.083 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:30.083 [2024-11-06 11:02:20.833589] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:20:30.083 [2024-11-06 11:02:20.833670] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.083 [2024-11-06 11:02:20.906038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.083 [2024-11-06 11:02:20.950639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.083 [2024-11-06 11:02:20.950685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.083 [2024-11-06 11:02:20.950692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.083 [2024-11-06 11:02:20.950697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.083 [2024-11-06 11:02:20.950702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
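The `waitforlisten 3285622` call above blocks until the freshly started `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A minimal sketch of that poll-until-ready pattern, with a background `touch` standing in for the application creating its socket (the file-existence check is a simplification; the real helper probes the RPC socket):

```shell
# Poll-until-ready loop in the spirit of waitforlisten: wait for a
# socket path to appear, giving up after max_retries attempts.
sock=$(mktemp -u)               # a path that does not exist yet
( sleep 0.2; touch "$sock" ) &  # stand-in for the app creating its socket
max_retries=100
ready=no
for (( i = 0; i < max_retries; i++ )); do
    if [ -e "$sock" ]; then
        ready=yes
        break
    fi
    sleep 0.05                  # back off briefly between probes
done
wait
echo "ready=$ready"
rm -f "$sock"
```

Bounding the retries is what lets the harness fail fast with a diagnostic instead of hanging when the target never comes up.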
00:20:30.083 [2024-11-06 11:02:20.951325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.vZT 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.vZT 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.vZT 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.vZT 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:30.083 [2024-11-06 11:02:21.276832] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.083 [2024-11-06 11:02:21.292838] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:30.083 [2024-11-06 11:02:21.293123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.083 malloc0 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3285774 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3285774 /var/tmp/bdevperf.sock 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3285774 ']' 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:30.083 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:30.083 [2024-11-06 11:02:21.423338] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:20:30.083 [2024-11-06 11:02:21.423411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285774 ] 00:20:30.083 [2024-11-06 11:02:21.488846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.345 [2024-11-06 11:02:21.524985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.916 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:30.916 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:30.916 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.vZT 00:20:31.178 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:31.178 [2024-11-06 11:02:22.552221] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.439 TLSTESTn1 00:20:31.439 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:31.439 Running I/O for 10 seconds... 
00:20:33.763 6064.00 IOPS, 23.69 MiB/s [2024-11-06T10:02:25.757Z] 5871.00 IOPS, 22.93 MiB/s [2024-11-06T10:02:27.140Z] 5729.33 IOPS, 22.38 MiB/s [2024-11-06T10:02:28.082Z] 5706.75 IOPS, 22.29 MiB/s [2024-11-06T10:02:29.022Z] 5885.20 IOPS, 22.99 MiB/s [2024-11-06T10:02:29.963Z] 5803.50 IOPS, 22.67 MiB/s [2024-11-06T10:02:30.904Z] 5881.00 IOPS, 22.97 MiB/s [2024-11-06T10:02:31.848Z] 5939.38 IOPS, 23.20 MiB/s [2024-11-06T10:02:32.789Z] 5959.00 IOPS, 23.28 MiB/s [2024-11-06T10:02:32.789Z] 5859.50 IOPS, 22.89 MiB/s 00:20:41.367 Latency(us) 00:20:41.367 [2024-11-06T10:02:32.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.367 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:41.367 Verification LBA range: start 0x0 length 0x2000 00:20:41.367 TLSTESTn1 : 10.01 5864.61 22.91 0.00 0.00 21794.94 5406.72 23483.73 00:20:41.367 [2024-11-06T10:02:32.789Z] =================================================================================================================== 00:20:41.367 [2024-11-06T10:02:32.789Z] Total : 5864.61 22.91 0.00 0.00 21794.94 5406.72 23483.73 00:20:41.367 { 00:20:41.367 "results": [ 00:20:41.367 { 00:20:41.367 "job": "TLSTESTn1", 00:20:41.367 "core_mask": "0x4", 00:20:41.367 "workload": "verify", 00:20:41.367 "status": "finished", 00:20:41.367 "verify_range": { 00:20:41.367 "start": 0, 00:20:41.367 "length": 8192 00:20:41.367 }, 00:20:41.367 "queue_depth": 128, 00:20:41.367 "io_size": 4096, 00:20:41.367 "runtime": 10.012936, 00:20:41.367 "iops": 5864.613535929921, 00:20:41.367 "mibps": 22.908646624726256, 00:20:41.367 "io_failed": 0, 00:20:41.367 "io_timeout": 0, 00:20:41.367 "avg_latency_us": 21794.935369594587, 00:20:41.367 "min_latency_us": 5406.72, 00:20:41.367 "max_latency_us": 23483.733333333334 00:20:41.367 } 00:20:41.367 ], 00:20:41.367 "core_count": 1 00:20:41.367 } 00:20:41.629 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:41.629 11:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:41.629 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:20:41.629 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:20:41.629 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:20:41.629 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:41.629 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:41.629 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:41.629 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:41.629 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:41.629 nvmf_trace.0 00:20:41.629 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:20:41.629 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3285774 00:20:41.629 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3285774 ']' 00:20:41.629 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 3285774 00:20:41.629 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:41.629 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:41.629 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3285774 00:20:41.630 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:41.630 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:41.630 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3285774' 00:20:41.630 killing process with pid 3285774 00:20:41.630 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3285774 00:20:41.630 Received shutdown signal, test time was about 10.000000 seconds 00:20:41.630 00:20:41.630 Latency(us) 00:20:41.630 [2024-11-06T10:02:33.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.630 [2024-11-06T10:02:33.052Z] =================================================================================================================== 00:20:41.630 [2024-11-06T10:02:33.052Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:41.630 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3285774 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:41.891 rmmod nvme_tcp 00:20:41.891 rmmod nvme_fabrics 00:20:41.891 rmmod nvme_keyring 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:41.891 11:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3285622 ']' 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3285622 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3285622 ']' 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 3285622 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3285622 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3285622' 00:20:41.891 killing process with pid 3285622 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3285622 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3285622 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # 
iptr 00:20:41.891 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:41.892 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:41.892 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:42.152 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:42.152 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:42.152 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.152 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.152 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.067 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:44.067 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.vZT 00:20:44.067 00:20:44.067 real 0m22.470s 00:20:44.067 user 0m24.327s 00:20:44.067 sys 0m9.290s 00:20:44.067 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:44.067 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:44.067 ************************************ 00:20:44.067 END TEST nvmf_fips 00:20:44.067 ************************************ 00:20:44.067 11:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:44.067 11:02:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:44.067 11:02:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:20:44.067 11:02:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:44.067 ************************************ 00:20:44.067 START TEST nvmf_control_msg_list 00:20:44.067 ************************************ 00:20:44.067 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:44.329 * Looking for test storage... 00:20:44.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:44.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.329 --rc genhtml_branch_coverage=1 00:20:44.329 --rc genhtml_function_coverage=1 00:20:44.329 --rc genhtml_legend=1 00:20:44.329 --rc geninfo_all_blocks=1 00:20:44.329 --rc geninfo_unexecuted_blocks=1 00:20:44.329 00:20:44.329 ' 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:44.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.329 --rc genhtml_branch_coverage=1 00:20:44.329 --rc genhtml_function_coverage=1 00:20:44.329 --rc genhtml_legend=1 00:20:44.329 --rc geninfo_all_blocks=1 00:20:44.329 --rc geninfo_unexecuted_blocks=1 00:20:44.329 00:20:44.329 ' 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:44.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.329 --rc genhtml_branch_coverage=1 00:20:44.329 --rc genhtml_function_coverage=1 00:20:44.329 --rc genhtml_legend=1 00:20:44.329 --rc geninfo_all_blocks=1 00:20:44.329 --rc geninfo_unexecuted_blocks=1 00:20:44.329 00:20:44.329 ' 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:44.329 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.329 --rc genhtml_branch_coverage=1 00:20:44.329 --rc genhtml_function_coverage=1 00:20:44.329 --rc genhtml_legend=1 00:20:44.329 --rc geninfo_all_blocks=1 00:20:44.329 --rc geninfo_unexecuted_blocks=1 00:20:44.329 00:20:44.329 ' 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.329 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.329 11:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.330 11:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:44.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:44.330 11:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:44.330 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.479 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.479 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:52.479 11:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:52.479 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:52.479 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:52.479 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:52.479 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:52.479 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:52.479 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:52.479 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:52.479 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:52.479 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:52.479 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:52.479 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:52.479 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:52.479 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.479 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:52.480 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:52.480 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:52.480 11:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:52.480 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.480 11:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:52.480 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.480 11:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:52.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:20:52.480 00:20:52.480 --- 10.0.0.2 ping statistics --- 00:20:52.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.480 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:20:52.480 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:52.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:20:52.480 00:20:52.480 --- 10.0.0.1 ping statistics --- 00:20:52.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.480 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:20:52.480 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.480 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:52.480 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:52.480 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.480 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3292126 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3292126 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 3292126 ']' 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:52.481 [2024-11-06 11:02:43.119232] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:20:52.481 [2024-11-06 11:02:43.119326] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.481 [2024-11-06 11:02:43.202219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.481 [2024-11-06 11:02:43.243657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.481 [2024-11-06 11:02:43.243694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.481 [2024-11-06 11:02:43.243702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.481 [2024-11-06 11:02:43.243709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.481 [2024-11-06 11:02:43.243715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:52.481 [2024-11-06 11:02:43.244308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:52.481 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.742 [2024-11-06 11:02:43.942842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.742 Malloc0 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.742 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:52.743 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.743 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.743 [2024-11-06 11:02:43.977624] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.743 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.743 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3292464 00:20:52.743 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3292465 00:20:52.743 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3292466 00:20:52.743 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3292464 00:20:52.743 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:52.743 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:52.743 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:52.743 [2024-11-06 11:02:44.056047] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:20:52.743 [2024-11-06 11:02:44.066054] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:52.743 [2024-11-06 11:02:44.075902] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:54.189 Initializing NVMe Controllers 00:20:54.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:54.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:54.189 Initialization complete. Launching workers. 00:20:54.189 ======================================================== 00:20:54.189 Latency(us) 00:20:54.189 Device Information : IOPS MiB/s Average min max 00:20:54.189 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40920.87 40799.99 41391.94 00:20:54.189 ======================================================== 00:20:54.189 Total : 25.00 0.10 40920.87 40799.99 41391.94 00:20:54.189 00:20:54.189 Initializing NVMe Controllers 00:20:54.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:54.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:54.189 Initialization complete. Launching workers. 
00:20:54.189 ======================================================== 00:20:54.189 Latency(us) 00:20:54.189 Device Information : IOPS MiB/s Average min max 00:20:54.189 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1552.00 6.06 644.42 159.42 821.78 00:20:54.189 ======================================================== 00:20:54.189 Total : 1552.00 6.06 644.42 159.42 821.78 00:20:54.189 00:20:54.189 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3292465 00:20:54.189 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3292466 00:20:54.189 Initializing NVMe Controllers 00:20:54.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:54.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:54.189 Initialization complete. Launching workers. 00:20:54.189 ======================================================== 00:20:54.189 Latency(us) 00:20:54.189 Device Information : IOPS MiB/s Average min max 00:20:54.190 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40913.14 40822.63 41088.82 00:20:54.190 ======================================================== 00:20:54.190 Total : 25.00 0.10 40913.14 40822.63 41088.82 00:20:54.190 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:54.190 11:02:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:54.190 rmmod nvme_tcp 00:20:54.190 rmmod nvme_fabrics 00:20:54.190 rmmod nvme_keyring 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3292126 ']' 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3292126 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 3292126 ']' 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 3292126 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3292126 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- 
# echo 'killing process with pid 3292126' 00:20:54.190 killing process with pid 3292126 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 3292126 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 3292126 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.190 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:56.776 00:20:56.776 real 0m12.167s 00:20:56.776 user 0m7.842s 
00:20:56.776 sys 0m6.430s 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:56.776 ************************************ 00:20:56.776 END TEST nvmf_control_msg_list 00:20:56.776 ************************************ 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:56.776 ************************************ 00:20:56.776 START TEST nvmf_wait_for_buf 00:20:56.776 ************************************ 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:56.776 * Looking for test storage... 
00:20:56.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:20:56.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.776 --rc genhtml_branch_coverage=1 00:20:56.776 --rc genhtml_function_coverage=1 00:20:56.776 --rc genhtml_legend=1 00:20:56.776 --rc geninfo_all_blocks=1 00:20:56.776 --rc geninfo_unexecuted_blocks=1 00:20:56.776 00:20:56.776 ' 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:56.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.776 --rc genhtml_branch_coverage=1 00:20:56.776 --rc genhtml_function_coverage=1 00:20:56.776 --rc genhtml_legend=1 00:20:56.776 --rc geninfo_all_blocks=1 00:20:56.776 --rc geninfo_unexecuted_blocks=1 00:20:56.776 00:20:56.776 ' 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:56.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.776 --rc genhtml_branch_coverage=1 00:20:56.776 --rc genhtml_function_coverage=1 00:20:56.776 --rc genhtml_legend=1 00:20:56.776 --rc geninfo_all_blocks=1 00:20:56.776 --rc geninfo_unexecuted_blocks=1 00:20:56.776 00:20:56.776 ' 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:56.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.776 --rc genhtml_branch_coverage=1 00:20:56.776 --rc genhtml_function_coverage=1 00:20:56.776 --rc genhtml_legend=1 00:20:56.776 --rc geninfo_all_blocks=1 00:20:56.776 --rc geninfo_unexecuted_blocks=1 00:20:56.776 00:20:56.776 ' 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.776 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:56.777 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:03.363 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:03.363 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:03.363 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:03.363 11:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.363 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:03.363 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:03.364 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.364 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:03.364 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:03.364 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:03.364 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:03.364 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:03.364 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:03.364 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.364 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.364 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.364 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:03.364 11:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.364 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.364 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:03.364 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:03.364 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.364 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.364 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:03.624 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:03.624 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:03.624 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.624 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.624 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:03.624 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:03.624 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.624 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.624 11:02:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.624 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:03.624 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:03.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:21:03.624 00:21:03.624 --- 10.0.0.2 ping statistics --- 00:21:03.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.624 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:21:03.624 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:03.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:21:03.884 00:21:03.884 --- 10.0.0.1 ping statistics --- 00:21:03.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.884 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3296809 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 3296809 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 3296809 ']' 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.884 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:03.884 [2024-11-06 11:02:55.149203] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:21:03.884 [2024-11-06 11:02:55.149271] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.884 [2024-11-06 11:02:55.230762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.884 [2024-11-06 11:02:55.271152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.884 [2024-11-06 11:02:55.271189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:03.885 [2024-11-06 11:02:55.271198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.885 [2024-11-06 11:02:55.271204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.885 [2024-11-06 11:02:55.271210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:03.885 [2024-11-06 11:02:55.271801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:04.826 
11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.826 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:04.826 Malloc0 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:21:04.826 [2024-11-06 11:02:56.054587] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:04.826 [2024-11-06 11:02:56.078785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:04.826 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:04.826 [2024-11-06 11:02:56.180310] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:06.737 Initializing NVMe Controllers 00:21:06.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:06.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:06.737 Initialization complete. Launching workers. 00:21:06.737 ======================================================== 00:21:06.737 Latency(us) 00:21:06.737 Device Information : IOPS MiB/s Average min max 00:21:06.737 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 166002.43 47876.54 191552.03 00:21:06.737 ======================================================== 00:21:06.737 Total : 25.00 3.12 166002.43 47876.54 191552.03 00:21:06.737 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.737 11:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:06.737 rmmod nvme_tcp 00:21:06.737 rmmod nvme_fabrics 00:21:06.737 rmmod nvme_keyring 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3296809 ']' 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3296809 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 3296809 ']' 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 3296809 
00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3296809 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3296809' 00:21:06.737 killing process with pid 3296809 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 3296809 00:21:06.737 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 3296809 00:21:06.737 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:06.737 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:06.737 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:06.737 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:06.737 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:06.737 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:06.737 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:06.737 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:06.737 11:02:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:06.737 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.737 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.737 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.279 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:09.279 00:21:09.279 real 0m12.394s 00:21:09.279 user 0m5.019s 00:21:09.279 sys 0m5.879s 00:21:09.279 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:09.279 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:09.279 ************************************ 00:21:09.279 END TEST nvmf_wait_for_buf 00:21:09.279 ************************************ 00:21:09.279 11:03:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:09.279 11:03:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:09.279 11:03:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:09.279 11:03:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:09.279 11:03:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:09.279 11:03:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:17.421 
11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:17.421 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.421 11:03:07 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:17.421 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:17.421 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.421 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:17.422 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:17.422 ************************************ 00:21:17.422 START TEST nvmf_perf_adq 00:21:17.422 ************************************ 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:17.422 * Looking for test storage... 00:21:17.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:17.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.422 --rc genhtml_branch_coverage=1 00:21:17.422 --rc genhtml_function_coverage=1 00:21:17.422 --rc genhtml_legend=1 00:21:17.422 --rc geninfo_all_blocks=1 00:21:17.422 --rc geninfo_unexecuted_blocks=1 00:21:17.422 00:21:17.422 ' 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:17.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.422 --rc genhtml_branch_coverage=1 00:21:17.422 --rc genhtml_function_coverage=1 00:21:17.422 --rc genhtml_legend=1 00:21:17.422 --rc geninfo_all_blocks=1 00:21:17.422 --rc geninfo_unexecuted_blocks=1 00:21:17.422 00:21:17.422 ' 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:17.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.422 --rc genhtml_branch_coverage=1 00:21:17.422 --rc genhtml_function_coverage=1 00:21:17.422 --rc genhtml_legend=1 00:21:17.422 --rc geninfo_all_blocks=1 00:21:17.422 --rc geninfo_unexecuted_blocks=1 00:21:17.422 00:21:17.422 ' 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:17.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.422 --rc genhtml_branch_coverage=1 00:21:17.422 --rc genhtml_function_coverage=1 00:21:17.422 --rc genhtml_legend=1 00:21:17.422 --rc geninfo_all_blocks=1 00:21:17.422 --rc geninfo_unexecuted_blocks=1 00:21:17.422 00:21:17.422 ' 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.422 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.423 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.423 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:17.423 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.423 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:17.423 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:17.423 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:17.423 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.423 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.423 11:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.423 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:17.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:17.423 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:17.423 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:17.423 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:17.423 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:17.423 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:17.423 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:24.044 11:03:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:24.044 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:24.044 
Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.044 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:24.045 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:24.045 11:03:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:24.045 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:24.045 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:24.616 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:26.526 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:31.812 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:31.812 11:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:31.812 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:31.812 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:31.812 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.812 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.813 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.813 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:31.813 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:31.813 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:31.813 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:31.813 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:31.813 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:31.813 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.813 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:31.813 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:31.813 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:31.813 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:31.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:31.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:21:31.813 00:21:31.813 --- 10.0.0.2 ping statistics --- 00:21:31.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.813 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:31.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:31.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:21:31.813 00:21:31.813 --- 10.0.0.1 ping statistics --- 00:21:31.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.813 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3307608 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3307608 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3307608 ']' 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:31.813 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.075 [2024-11-06 11:03:23.275152] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:21:32.075 [2024-11-06 11:03:23.275203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.075 [2024-11-06 11:03:23.355442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:32.075 [2024-11-06 11:03:23.392263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.075 [2024-11-06 11:03:23.392298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.075 [2024-11-06 11:03:23.392306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.075 [2024-11-06 11:03:23.392313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.075 [2024-11-06 11:03:23.392319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:32.075 [2024-11-06 11:03:23.393780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.075 [2024-11-06 11:03:23.393846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.075 [2024-11-06 11:03:23.394011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.075 [2024-11-06 11:03:23.394011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:32.075 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:32.075 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:21:32.075 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:32.075 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:32.075 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:32.335 11:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.335 [2024-11-06 11:03:23.630131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.335 Malloc1 00:21:32.335 11:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:32.335 [2024-11-06 11:03:23.699119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3307640 00:21:32.335 11:03:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:32.335 11:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:34.878 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:34.878 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.878 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:34.878 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.878 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:34.878 "tick_rate": 2400000000, 00:21:34.878 "poll_groups": [ 00:21:34.878 { 00:21:34.878 "name": "nvmf_tgt_poll_group_000", 00:21:34.878 "admin_qpairs": 1, 00:21:34.878 "io_qpairs": 1, 00:21:34.878 "current_admin_qpairs": 1, 00:21:34.878 "current_io_qpairs": 1, 00:21:34.878 "pending_bdev_io": 0, 00:21:34.878 "completed_nvme_io": 19505, 00:21:34.878 "transports": [ 00:21:34.878 { 00:21:34.878 "trtype": "TCP" 00:21:34.879 } 00:21:34.879 ] 00:21:34.879 }, 00:21:34.879 { 00:21:34.879 "name": "nvmf_tgt_poll_group_001", 00:21:34.879 "admin_qpairs": 0, 00:21:34.879 "io_qpairs": 1, 00:21:34.879 "current_admin_qpairs": 0, 00:21:34.879 "current_io_qpairs": 1, 00:21:34.879 "pending_bdev_io": 0, 00:21:34.879 "completed_nvme_io": 27539, 00:21:34.879 "transports": [ 00:21:34.879 { 00:21:34.879 "trtype": "TCP" 00:21:34.879 } 00:21:34.879 ] 00:21:34.879 }, 00:21:34.879 { 00:21:34.879 "name": "nvmf_tgt_poll_group_002", 00:21:34.879 "admin_qpairs": 0, 00:21:34.879 "io_qpairs": 1, 00:21:34.879 "current_admin_qpairs": 0, 00:21:34.879 "current_io_qpairs": 1, 00:21:34.879 "pending_bdev_io": 0, 00:21:34.879 "completed_nvme_io": 20467, 00:21:34.879 
"transports": [ 00:21:34.879 { 00:21:34.879 "trtype": "TCP" 00:21:34.879 } 00:21:34.879 ] 00:21:34.879 }, 00:21:34.879 { 00:21:34.879 "name": "nvmf_tgt_poll_group_003", 00:21:34.879 "admin_qpairs": 0, 00:21:34.879 "io_qpairs": 1, 00:21:34.879 "current_admin_qpairs": 0, 00:21:34.879 "current_io_qpairs": 1, 00:21:34.879 "pending_bdev_io": 0, 00:21:34.879 "completed_nvme_io": 19825, 00:21:34.879 "transports": [ 00:21:34.879 { 00:21:34.879 "trtype": "TCP" 00:21:34.879 } 00:21:34.879 ] 00:21:34.879 } 00:21:34.879 ] 00:21:34.879 }' 00:21:34.879 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:34.879 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:34.879 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:34.879 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:34.879 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3307640 00:21:43.022 Initializing NVMe Controllers 00:21:43.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:43.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:43.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:43.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:43.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:43.022 Initialization complete. Launching workers. 
00:21:43.022 ======================================================== 00:21:43.022 Latency(us) 00:21:43.022 Device Information : IOPS MiB/s Average min max 00:21:43.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10936.00 42.72 5853.65 1158.04 10204.85 00:21:43.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14903.80 58.22 4293.61 1202.32 9969.41 00:21:43.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13904.40 54.31 4602.73 1553.14 11183.00 00:21:43.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12619.10 49.29 5072.36 1386.83 10043.04 00:21:43.022 ======================================================== 00:21:43.022 Total : 52363.30 204.54 4889.18 1158.04 11183.00 00:21:43.022 00:21:43.022 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:43.022 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:43.022 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:43.022 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:43.022 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:43.022 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:43.022 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:43.022 rmmod nvme_tcp 00:21:43.022 rmmod nvme_fabrics 00:21:43.022 rmmod nvme_keyring 00:21:43.022 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.022 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:43.022 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:43.022 11:03:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3307608 ']' 00:21:43.022 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3307608 00:21:43.022 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3307608 ']' 00:21:43.022 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3307608 00:21:43.022 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:21:43.022 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:43.022 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3307608 00:21:43.022 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:43.022 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:43.022 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3307608' 00:21:43.022 killing process with pid 3307608 00:21:43.022 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3307608 00:21:43.022 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3307608 00:21:43.022 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:43.022 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:43.022 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:43.022 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:43.022 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:21:43.022 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:43.022 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:43.022 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.022 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:43.022 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.022 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.022 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.934 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:44.934 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:44.934 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:44.934 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:46.320 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:48.233 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:53.525 11:03:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.525 11:03:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:53.525 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.1 (0x8086 - 0x159b)' 00:21:53.525 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:53.525 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:53.526 Found net devices under 0000:4b:00.0: cvl_0_0 
00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:53.526 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:53.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:53.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:21:53.526 00:21:53.526 --- 10.0.0.2 ping statistics --- 00:21:53.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.526 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:53.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:21:53.526 00:21:53.526 --- 10.0.0.1 ping statistics --- 00:21:53.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.526 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:53.526 net.core.busy_poll = 1 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:53.526 net.core.busy_read = 1 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:53.526 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:53.788 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:53.788 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:53.788 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:53.788 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:53.788 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:53.788 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:53.788 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.788 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3312260 00:21:53.788 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3312260 00:21:53.788 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:21:53.788 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3312260 ']' 00:21:53.788 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.788 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:53.788 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.788 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:53.788 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.050 [2024-11-06 11:03:45.228317] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:21:54.050 [2024-11-06 11:03:45.228389] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.050 [2024-11-06 11:03:45.311449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:54.050 [2024-11-06 11:03:45.353398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.050 [2024-11-06 11:03:45.353437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.050 [2024-11-06 11:03:45.353445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.050 [2024-11-06 11:03:45.353452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:54.050 [2024-11-06 11:03:45.353458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:54.050 [2024-11-06 11:03:45.355061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.050 [2024-11-06 11:03:45.355178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.050 [2024-11-06 11:03:45.355332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.050 [2024-11-06 11:03:45.355334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:54.624 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:54.624 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:21:54.624 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:54.624 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:54.624 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.884 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.884 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:54.884 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:54.884 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:54.884 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.884 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.884 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:54.884 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:54.884 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:54.884 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.884 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.884 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.884 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:54.884 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.884 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.885 [2024-11-06 11:03:46.196904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.885 11:03:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.885 Malloc1 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.885 [2024-11-06 11:03:46.267115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3312461 
00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:54.885 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:57.432 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:57.432 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.432 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:57.432 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.432 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:57.432 "tick_rate": 2400000000, 00:21:57.432 "poll_groups": [ 00:21:57.432 { 00:21:57.432 "name": "nvmf_tgt_poll_group_000", 00:21:57.432 "admin_qpairs": 1, 00:21:57.432 "io_qpairs": 4, 00:21:57.432 "current_admin_qpairs": 1, 00:21:57.432 "current_io_qpairs": 4, 00:21:57.432 "pending_bdev_io": 0, 00:21:57.432 "completed_nvme_io": 35512, 00:21:57.432 "transports": [ 00:21:57.432 { 00:21:57.432 "trtype": "TCP" 00:21:57.432 } 00:21:57.432 ] 00:21:57.432 }, 00:21:57.432 { 00:21:57.432 "name": "nvmf_tgt_poll_group_001", 00:21:57.432 "admin_qpairs": 0, 00:21:57.432 "io_qpairs": 0, 00:21:57.432 "current_admin_qpairs": 0, 00:21:57.432 "current_io_qpairs": 0, 00:21:57.432 "pending_bdev_io": 0, 00:21:57.432 "completed_nvme_io": 0, 00:21:57.432 "transports": [ 00:21:57.432 { 00:21:57.432 "trtype": "TCP" 00:21:57.432 } 00:21:57.432 ] 00:21:57.432 }, 00:21:57.432 { 00:21:57.432 "name": "nvmf_tgt_poll_group_002", 00:21:57.432 "admin_qpairs": 0, 00:21:57.432 "io_qpairs": 0, 00:21:57.432 "current_admin_qpairs": 0, 00:21:57.432 
"current_io_qpairs": 0, 00:21:57.432 "pending_bdev_io": 0, 00:21:57.432 "completed_nvme_io": 0, 00:21:57.432 "transports": [ 00:21:57.432 { 00:21:57.432 "trtype": "TCP" 00:21:57.432 } 00:21:57.432 ] 00:21:57.432 }, 00:21:57.432 { 00:21:57.432 "name": "nvmf_tgt_poll_group_003", 00:21:57.432 "admin_qpairs": 0, 00:21:57.432 "io_qpairs": 0, 00:21:57.432 "current_admin_qpairs": 0, 00:21:57.432 "current_io_qpairs": 0, 00:21:57.432 "pending_bdev_io": 0, 00:21:57.432 "completed_nvme_io": 0, 00:21:57.432 "transports": [ 00:21:57.432 { 00:21:57.432 "trtype": "TCP" 00:21:57.432 } 00:21:57.432 ] 00:21:57.432 } 00:21:57.432 ] 00:21:57.432 }' 00:21:57.432 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:57.432 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:57.432 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:21:57.432 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:21:57.432 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3312461 00:22:05.568 Initializing NVMe Controllers 00:22:05.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:05.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:05.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:05.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:05.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:05.568 Initialization complete. Launching workers. 
00:22:05.568 ======================================================== 00:22:05.568 Latency(us) 00:22:05.568 Device Information : IOPS MiB/s Average min max 00:22:05.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6581.80 25.71 9724.97 1332.52 58833.56 00:22:05.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6492.60 25.36 9857.26 1108.74 59492.32 00:22:05.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5216.60 20.38 12306.13 1975.36 59075.31 00:22:05.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6135.00 23.96 10432.32 1289.17 59398.74 00:22:05.568 ======================================================== 00:22:05.568 Total : 24426.00 95.41 10489.05 1108.74 59492.32 00:22:05.568 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:05.568 rmmod nvme_tcp 00:22:05.568 rmmod nvme_fabrics 00:22:05.568 rmmod nvme_keyring 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:05.568 11:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3312260 ']' 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3312260 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3312260 ']' 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3312260 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3312260 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3312260' 00:22:05.568 killing process with pid 3312260 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3312260 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3312260 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:05.568 
11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.568 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.959 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:08.959 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:08.959 00:22:08.959 real 0m52.455s 00:22:08.959 user 2m47.482s 00:22:08.959 sys 0m10.778s 00:22:08.959 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:08.959 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:08.959 ************************************ 00:22:08.959 END TEST nvmf_perf_adq 00:22:08.959 ************************************ 00:22:08.959 11:03:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:08.959 11:03:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:08.959 11:03:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:08.959 11:03:59 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:22:08.959 ************************************ 00:22:08.959 START TEST nvmf_shutdown 00:22:08.959 ************************************ 00:22:08.959 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:08.959 * Looking for test storage... 00:22:08.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:08.959 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:08.959 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:08.959 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:08.959 11:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:08.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.959 --rc genhtml_branch_coverage=1 00:22:08.959 --rc genhtml_function_coverage=1 00:22:08.959 --rc genhtml_legend=1 00:22:08.959 --rc geninfo_all_blocks=1 00:22:08.959 --rc geninfo_unexecuted_blocks=1 00:22:08.959 00:22:08.959 ' 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:08.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.959 --rc genhtml_branch_coverage=1 00:22:08.959 --rc genhtml_function_coverage=1 00:22:08.959 --rc genhtml_legend=1 00:22:08.959 --rc geninfo_all_blocks=1 00:22:08.959 --rc geninfo_unexecuted_blocks=1 00:22:08.959 00:22:08.959 ' 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:08.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.959 --rc genhtml_branch_coverage=1 00:22:08.959 --rc genhtml_function_coverage=1 00:22:08.959 --rc genhtml_legend=1 00:22:08.959 --rc geninfo_all_blocks=1 00:22:08.959 --rc geninfo_unexecuted_blocks=1 00:22:08.959 00:22:08.959 ' 00:22:08.959 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:08.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.959 --rc genhtml_branch_coverage=1 00:22:08.959 --rc genhtml_function_coverage=1 00:22:08.959 --rc genhtml_legend=1 00:22:08.959 --rc geninfo_all_blocks=1 00:22:08.960 --rc geninfo_unexecuted_blocks=1 00:22:08.960 00:22:08.960 ' 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:08.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:08.960 ************************************ 00:22:08.960 START TEST nvmf_shutdown_tc1 00:22:08.960 ************************************ 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:08.960 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:17.099 11:04:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.099 11:04:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:17.099 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:17.100 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.100 11:04:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:17.100 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:17.100 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:17.100 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:17.100 11:04:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:17.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:22:17.100 00:22:17.100 --- 10.0.0.2 ping statistics --- 00:22:17.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.100 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:22:17.100 00:22:17.100 --- 10.0.0.1 ping statistics --- 00:22:17.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.100 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3318928 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3318928 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3318928 ']' 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:17.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:17.100 11:04:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:17.100 [2024-11-06 11:04:07.640206] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:22:17.100 [2024-11-06 11:04:07.640274] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.101 [2024-11-06 11:04:07.740168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:17.101 [2024-11-06 11:04:07.792448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.101 [2024-11-06 11:04:07.792505] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.101 [2024-11-06 11:04:07.792514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.101 [2024-11-06 11:04:07.792522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.101 [2024-11-06 11:04:07.792528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:17.101 [2024-11-06 11:04:07.794539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.101 [2024-11-06 11:04:07.794710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.101 [2024-11-06 11:04:07.794846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.101 [2024-11-06 11:04:07.794846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:17.101 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:17.101 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:17.101 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:17.101 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:17.101 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:17.101 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.101 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:17.101 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.101 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:17.101 [2024-11-06 11:04:08.494752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.101 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.101 11:04:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:17.101 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:17.101 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:17.101 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:17.101 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:17.101 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:17.101 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.361 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:17.361 Malloc1 00:22:17.361 [2024-11-06 11:04:08.621410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.361 Malloc2 00:22:17.361 Malloc3 00:22:17.361 Malloc4 00:22:17.361 Malloc5 00:22:17.622 Malloc6 00:22:17.622 Malloc7 00:22:17.622 Malloc8 00:22:17.622 Malloc9 
00:22:17.622 Malloc10 00:22:17.622 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.622 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:17.622 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:17.622 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:17.622 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3319319 00:22:17.622 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3319319 /var/tmp/bdevperf.sock 00:22:17.622 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3319319 ']' 00:22:17.622 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:17.622 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:17.622 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:17.622 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:17.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:17.622 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:17.622 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:17.622 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:17.622 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:17.622 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:17.622 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.622 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.622 { 00:22:17.622 "params": { 00:22:17.622 "name": "Nvme$subsystem", 00:22:17.622 "trtype": "$TEST_TRANSPORT", 00:22:17.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.622 "adrfam": "ipv4", 00:22:17.622 "trsvcid": "$NVMF_PORT", 00:22:17.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.622 "hdgst": ${hdgst:-false}, 00:22:17.622 "ddgst": ${ddgst:-false} 00:22:17.622 }, 00:22:17.622 "method": "bdev_nvme_attach_controller" 00:22:17.622 } 00:22:17.622 EOF 00:22:17.622 )") 00:22:17.623 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.623 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.623 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.623 { 00:22:17.623 "params": { 00:22:17.623 "name": "Nvme$subsystem", 00:22:17.623 "trtype": "$TEST_TRANSPORT", 00:22:17.623 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.623 "adrfam": "ipv4", 00:22:17.623 "trsvcid": "$NVMF_PORT", 00:22:17.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.623 "hdgst": ${hdgst:-false}, 00:22:17.623 "ddgst": ${ddgst:-false} 00:22:17.623 }, 00:22:17.623 "method": "bdev_nvme_attach_controller" 00:22:17.623 } 00:22:17.623 EOF 00:22:17.623 )") 00:22:17.623 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.883 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.883 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.883 { 00:22:17.883 "params": { 00:22:17.883 "name": "Nvme$subsystem", 00:22:17.883 "trtype": "$TEST_TRANSPORT", 00:22:17.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.883 "adrfam": "ipv4", 00:22:17.883 "trsvcid": "$NVMF_PORT", 00:22:17.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.883 "hdgst": ${hdgst:-false}, 00:22:17.883 "ddgst": ${ddgst:-false} 00:22:17.883 }, 00:22:17.883 "method": "bdev_nvme_attach_controller" 00:22:17.883 } 00:22:17.883 EOF 00:22:17.883 )") 00:22:17.883 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.883 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.883 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.883 { 00:22:17.883 "params": { 00:22:17.883 "name": "Nvme$subsystem", 00:22:17.883 "trtype": "$TEST_TRANSPORT", 00:22:17.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.883 "adrfam": "ipv4", 00:22:17.883 "trsvcid": "$NVMF_PORT", 00:22:17.883 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.883 "hdgst": ${hdgst:-false}, 00:22:17.883 "ddgst": ${ddgst:-false} 00:22:17.883 }, 00:22:17.883 "method": "bdev_nvme_attach_controller" 00:22:17.883 } 00:22:17.883 EOF 00:22:17.883 )") 00:22:17.883 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.883 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.883 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.883 { 00:22:17.883 "params": { 00:22:17.883 "name": "Nvme$subsystem", 00:22:17.883 "trtype": "$TEST_TRANSPORT", 00:22:17.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.883 "adrfam": "ipv4", 00:22:17.883 "trsvcid": "$NVMF_PORT", 00:22:17.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.883 "hdgst": ${hdgst:-false}, 00:22:17.883 "ddgst": ${ddgst:-false} 00:22:17.883 }, 00:22:17.883 "method": "bdev_nvme_attach_controller" 00:22:17.883 } 00:22:17.883 EOF 00:22:17.883 )") 00:22:17.883 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.883 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.883 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.883 { 00:22:17.883 "params": { 00:22:17.883 "name": "Nvme$subsystem", 00:22:17.883 "trtype": "$TEST_TRANSPORT", 00:22:17.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.883 "adrfam": "ipv4", 00:22:17.883 "trsvcid": "$NVMF_PORT", 00:22:17.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.884 "hdgst": 
${hdgst:-false}, 00:22:17.884 "ddgst": ${ddgst:-false} 00:22:17.884 }, 00:22:17.884 "method": "bdev_nvme_attach_controller" 00:22:17.884 } 00:22:17.884 EOF 00:22:17.884 )") 00:22:17.884 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.884 [2024-11-06 11:04:09.073100] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:22:17.884 [2024-11-06 11:04:09.073154] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:17.884 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.884 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.884 { 00:22:17.884 "params": { 00:22:17.884 "name": "Nvme$subsystem", 00:22:17.884 "trtype": "$TEST_TRANSPORT", 00:22:17.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.884 "adrfam": "ipv4", 00:22:17.884 "trsvcid": "$NVMF_PORT", 00:22:17.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.884 "hdgst": ${hdgst:-false}, 00:22:17.884 "ddgst": ${ddgst:-false} 00:22:17.884 }, 00:22:17.884 "method": "bdev_nvme_attach_controller" 00:22:17.884 } 00:22:17.884 EOF 00:22:17.884 )") 00:22:17.884 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.884 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.884 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.884 { 00:22:17.884 "params": { 00:22:17.884 "name": "Nvme$subsystem", 00:22:17.884 "trtype": 
"$TEST_TRANSPORT", 00:22:17.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.884 "adrfam": "ipv4", 00:22:17.884 "trsvcid": "$NVMF_PORT", 00:22:17.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.884 "hdgst": ${hdgst:-false}, 00:22:17.884 "ddgst": ${ddgst:-false} 00:22:17.884 }, 00:22:17.884 "method": "bdev_nvme_attach_controller" 00:22:17.884 } 00:22:17.884 EOF 00:22:17.884 )") 00:22:17.884 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.884 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.884 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.884 { 00:22:17.884 "params": { 00:22:17.884 "name": "Nvme$subsystem", 00:22:17.884 "trtype": "$TEST_TRANSPORT", 00:22:17.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.884 "adrfam": "ipv4", 00:22:17.884 "trsvcid": "$NVMF_PORT", 00:22:17.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.884 "hdgst": ${hdgst:-false}, 00:22:17.884 "ddgst": ${ddgst:-false} 00:22:17.884 }, 00:22:17.884 "method": "bdev_nvme_attach_controller" 00:22:17.884 } 00:22:17.884 EOF 00:22:17.884 )") 00:22:17.884 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.884 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.884 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.884 { 00:22:17.884 "params": { 00:22:17.884 "name": "Nvme$subsystem", 00:22:17.884 "trtype": "$TEST_TRANSPORT", 00:22:17.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.884 "adrfam": "ipv4", 00:22:17.884 "trsvcid": 
"$NVMF_PORT", 00:22:17.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.884 "hdgst": ${hdgst:-false}, 00:22:17.884 "ddgst": ${ddgst:-false} 00:22:17.884 }, 00:22:17.884 "method": "bdev_nvme_attach_controller" 00:22:17.884 } 00:22:17.884 EOF 00:22:17.884 )") 00:22:17.884 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.884 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:17.884 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:17.884 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:17.884 "params": { 00:22:17.884 "name": "Nvme1", 00:22:17.884 "trtype": "tcp", 00:22:17.884 "traddr": "10.0.0.2", 00:22:17.884 "adrfam": "ipv4", 00:22:17.884 "trsvcid": "4420", 00:22:17.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.884 "hdgst": false, 00:22:17.884 "ddgst": false 00:22:17.884 }, 00:22:17.884 "method": "bdev_nvme_attach_controller" 00:22:17.884 },{ 00:22:17.884 "params": { 00:22:17.884 "name": "Nvme2", 00:22:17.884 "trtype": "tcp", 00:22:17.884 "traddr": "10.0.0.2", 00:22:17.884 "adrfam": "ipv4", 00:22:17.884 "trsvcid": "4420", 00:22:17.884 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:17.884 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:17.884 "hdgst": false, 00:22:17.884 "ddgst": false 00:22:17.884 }, 00:22:17.884 "method": "bdev_nvme_attach_controller" 00:22:17.884 },{ 00:22:17.884 "params": { 00:22:17.884 "name": "Nvme3", 00:22:17.884 "trtype": "tcp", 00:22:17.884 "traddr": "10.0.0.2", 00:22:17.884 "adrfam": "ipv4", 00:22:17.884 "trsvcid": "4420", 00:22:17.884 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:17.884 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:17.884 "hdgst": false, 00:22:17.884 
"ddgst": false 00:22:17.884 }, 00:22:17.884 "method": "bdev_nvme_attach_controller" 00:22:17.884 },{ 00:22:17.884 "params": { 00:22:17.884 "name": "Nvme4", 00:22:17.884 "trtype": "tcp", 00:22:17.884 "traddr": "10.0.0.2", 00:22:17.884 "adrfam": "ipv4", 00:22:17.884 "trsvcid": "4420", 00:22:17.884 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:17.884 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:17.884 "hdgst": false, 00:22:17.884 "ddgst": false 00:22:17.884 }, 00:22:17.884 "method": "bdev_nvme_attach_controller" 00:22:17.884 },{ 00:22:17.884 "params": { 00:22:17.884 "name": "Nvme5", 00:22:17.884 "trtype": "tcp", 00:22:17.884 "traddr": "10.0.0.2", 00:22:17.884 "adrfam": "ipv4", 00:22:17.884 "trsvcid": "4420", 00:22:17.884 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:17.884 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:17.884 "hdgst": false, 00:22:17.884 "ddgst": false 00:22:17.884 }, 00:22:17.884 "method": "bdev_nvme_attach_controller" 00:22:17.884 },{ 00:22:17.884 "params": { 00:22:17.884 "name": "Nvme6", 00:22:17.884 "trtype": "tcp", 00:22:17.884 "traddr": "10.0.0.2", 00:22:17.884 "adrfam": "ipv4", 00:22:17.884 "trsvcid": "4420", 00:22:17.884 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:17.884 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:17.884 "hdgst": false, 00:22:17.884 "ddgst": false 00:22:17.884 }, 00:22:17.884 "method": "bdev_nvme_attach_controller" 00:22:17.884 },{ 00:22:17.884 "params": { 00:22:17.884 "name": "Nvme7", 00:22:17.884 "trtype": "tcp", 00:22:17.884 "traddr": "10.0.0.2", 00:22:17.884 "adrfam": "ipv4", 00:22:17.884 "trsvcid": "4420", 00:22:17.884 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:17.884 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:17.884 "hdgst": false, 00:22:17.884 "ddgst": false 00:22:17.884 }, 00:22:17.884 "method": "bdev_nvme_attach_controller" 00:22:17.884 },{ 00:22:17.884 "params": { 00:22:17.884 "name": "Nvme8", 00:22:17.884 "trtype": "tcp", 00:22:17.884 "traddr": "10.0.0.2", 00:22:17.884 "adrfam": "ipv4", 00:22:17.884 
"trsvcid": "4420", 00:22:17.884 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:17.884 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:17.884 "hdgst": false, 00:22:17.884 "ddgst": false 00:22:17.884 }, 00:22:17.884 "method": "bdev_nvme_attach_controller" 00:22:17.884 },{ 00:22:17.884 "params": { 00:22:17.884 "name": "Nvme9", 00:22:17.884 "trtype": "tcp", 00:22:17.884 "traddr": "10.0.0.2", 00:22:17.884 "adrfam": "ipv4", 00:22:17.884 "trsvcid": "4420", 00:22:17.884 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:17.884 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:17.884 "hdgst": false, 00:22:17.884 "ddgst": false 00:22:17.884 }, 00:22:17.884 "method": "bdev_nvme_attach_controller" 00:22:17.884 },{ 00:22:17.884 "params": { 00:22:17.884 "name": "Nvme10", 00:22:17.884 "trtype": "tcp", 00:22:17.884 "traddr": "10.0.0.2", 00:22:17.884 "adrfam": "ipv4", 00:22:17.884 "trsvcid": "4420", 00:22:17.884 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:17.884 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:17.884 "hdgst": false, 00:22:17.884 "ddgst": false 00:22:17.884 }, 00:22:17.884 "method": "bdev_nvme_attach_controller" 00:22:17.884 }' 00:22:17.884 [2024-11-06 11:04:09.145054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.885 [2024-11-06 11:04:09.181222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.268 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:19.268 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:19.268 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:19.268 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.268 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
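For readers following the trace: the `gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10` call above expands the `nvmf/common.sh` heredoc template once per subsystem id and joins the fragments (via `jq` and `printf '%s\n'`), producing the `bdev_nvme_attach_controller` parameter list visible in the log output. A rough Python equivalent of just that expansion step (a sketch for illustration only; the helper name `gen_target_config` is invented here and is not part of SPDK, and the full JSON fed to bdevperf has additional wrapping not shown in this excerpt) looks like:

```python
import json

def gen_target_config(subsystems, traddr="10.0.0.2", trsvcid="4420"):
    """Mimic the per-subsystem expansion seen in gen_nvmf_target_json:
    one bdev_nvme_attach_controller entry per subsystem id, with the
    subnqn/hostnqn derived from the id, as in the trace above."""
    return [
        {
            "params": {
                "name": f"Nvme{i}",
                "trtype": "tcp",
                "traddr": traddr,
                "adrfam": "ipv4",
                "trsvcid": trsvcid,
                "subnqn": f"nqn.2016-06.io.spdk:cnode{i}",
                "hostnqn": f"nqn.2016-06.io.spdk:host{i}",
                "hdgst": False,
                "ddgst": False,
            },
            "method": "bdev_nvme_attach_controller",
        }
        for i in subsystems
    ]

# Ten subsystems, matching the `gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10`
# invocation in the log.
config = gen_target_config(range(1, 11))
print(json.dumps(config[0], indent=2))
```

This mirrors why the same `EOF`-delimited template appears ten times in the xtrace output: the shell loop re-evaluates it once per subsystem id before the final `printf` emits the substituted result.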
common/autotest_common.sh@10 -- # set +x 00:22:19.268 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.268 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3319319 00:22:19.268 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:19.268 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:20.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3319319 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3318928 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.652 { 00:22:20.652 "params": { 00:22:20.652 "name": "Nvme$subsystem", 00:22:20.652 "trtype": "$TEST_TRANSPORT", 00:22:20.652 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:22:20.652 "adrfam": "ipv4", 00:22:20.652 "trsvcid": "$NVMF_PORT", 00:22:20.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.652 "hdgst": ${hdgst:-false}, 00:22:20.652 "ddgst": ${ddgst:-false} 00:22:20.652 }, 00:22:20.652 "method": "bdev_nvme_attach_controller" 00:22:20.652 } 00:22:20.652 EOF 00:22:20.652 )") 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.652 { 00:22:20.652 "params": { 00:22:20.652 "name": "Nvme$subsystem", 00:22:20.652 "trtype": "$TEST_TRANSPORT", 00:22:20.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.652 "adrfam": "ipv4", 00:22:20.652 "trsvcid": "$NVMF_PORT", 00:22:20.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.652 "hdgst": ${hdgst:-false}, 00:22:20.652 "ddgst": ${ddgst:-false} 00:22:20.652 }, 00:22:20.652 "method": "bdev_nvme_attach_controller" 00:22:20.652 } 00:22:20.652 EOF 00:22:20.652 )") 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.652 { 00:22:20.652 "params": { 00:22:20.652 "name": "Nvme$subsystem", 00:22:20.652 "trtype": "$TEST_TRANSPORT", 00:22:20.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.652 "adrfam": "ipv4", 00:22:20.652 "trsvcid": "$NVMF_PORT", 00:22:20.652 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.652 "hdgst": ${hdgst:-false}, 00:22:20.652 "ddgst": ${ddgst:-false} 00:22:20.652 }, 00:22:20.652 "method": "bdev_nvme_attach_controller" 00:22:20.652 } 00:22:20.652 EOF 00:22:20.652 )") 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.652 { 00:22:20.652 "params": { 00:22:20.652 "name": "Nvme$subsystem", 00:22:20.652 "trtype": "$TEST_TRANSPORT", 00:22:20.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.652 "adrfam": "ipv4", 00:22:20.652 "trsvcid": "$NVMF_PORT", 00:22:20.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.652 "hdgst": ${hdgst:-false}, 00:22:20.652 "ddgst": ${ddgst:-false} 00:22:20.652 }, 00:22:20.652 "method": "bdev_nvme_attach_controller" 00:22:20.652 } 00:22:20.652 EOF 00:22:20.652 )") 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.652 { 00:22:20.652 "params": { 00:22:20.652 "name": "Nvme$subsystem", 00:22:20.652 "trtype": "$TEST_TRANSPORT", 00:22:20.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.652 "adrfam": "ipv4", 00:22:20.652 "trsvcid": "$NVMF_PORT", 00:22:20.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.652 "hdgst": 
${hdgst:-false}, 00:22:20.652 "ddgst": ${ddgst:-false} 00:22:20.652 }, 00:22:20.652 "method": "bdev_nvme_attach_controller" 00:22:20.652 } 00:22:20.652 EOF 00:22:20.652 )") 00:22:20.652 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.653 { 00:22:20.653 "params": { 00:22:20.653 "name": "Nvme$subsystem", 00:22:20.653 "trtype": "$TEST_TRANSPORT", 00:22:20.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.653 "adrfam": "ipv4", 00:22:20.653 "trsvcid": "$NVMF_PORT", 00:22:20.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.653 "hdgst": ${hdgst:-false}, 00:22:20.653 "ddgst": ${ddgst:-false} 00:22:20.653 }, 00:22:20.653 "method": "bdev_nvme_attach_controller" 00:22:20.653 } 00:22:20.653 EOF 00:22:20.653 )") 00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.653 { 00:22:20.653 "params": { 00:22:20.653 "name": "Nvme$subsystem", 00:22:20.653 "trtype": "$TEST_TRANSPORT", 00:22:20.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.653 "adrfam": "ipv4", 00:22:20.653 "trsvcid": "$NVMF_PORT", 00:22:20.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.653 "hdgst": ${hdgst:-false}, 00:22:20.653 "ddgst": ${ddgst:-false} 00:22:20.653 }, 00:22:20.653 "method": "bdev_nvme_attach_controller" 
00:22:20.653 } 00:22:20.653 EOF 00:22:20.653 )") 00:22:20.653 [2024-11-06 11:04:11.715349] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:22:20.653 [2024-11-06 11:04:11.715403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3319818 ] 00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.653 { 00:22:20.653 "params": { 00:22:20.653 "name": "Nvme$subsystem", 00:22:20.653 "trtype": "$TEST_TRANSPORT", 00:22:20.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.653 "adrfam": "ipv4", 00:22:20.653 "trsvcid": "$NVMF_PORT", 00:22:20.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.653 "hdgst": ${hdgst:-false}, 00:22:20.653 "ddgst": ${ddgst:-false} 00:22:20.653 }, 00:22:20.653 "method": "bdev_nvme_attach_controller" 00:22:20.653 } 00:22:20.653 EOF 00:22:20.653 )") 00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.653 { 00:22:20.653 "params": { 00:22:20.653 "name": "Nvme$subsystem", 00:22:20.653 "trtype": "$TEST_TRANSPORT", 00:22:20.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.653 "adrfam": "ipv4", 00:22:20.653 
"trsvcid": "$NVMF_PORT", 00:22:20.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.653 "hdgst": ${hdgst:-false}, 00:22:20.653 "ddgst": ${ddgst:-false} 00:22:20.653 }, 00:22:20.653 "method": "bdev_nvme_attach_controller" 00:22:20.653 } 00:22:20.653 EOF 00:22:20.653 )") 00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.653 { 00:22:20.653 "params": { 00:22:20.653 "name": "Nvme$subsystem", 00:22:20.653 "trtype": "$TEST_TRANSPORT", 00:22:20.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.653 "adrfam": "ipv4", 00:22:20.653 "trsvcid": "$NVMF_PORT", 00:22:20.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.653 "hdgst": ${hdgst:-false}, 00:22:20.653 "ddgst": ${ddgst:-false} 00:22:20.653 }, 00:22:20.653 "method": "bdev_nvme_attach_controller" 00:22:20.653 } 00:22:20.653 EOF 00:22:20.653 )") 00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:20.653 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:20.653 "params": { 00:22:20.653 "name": "Nvme1", 00:22:20.653 "trtype": "tcp", 00:22:20.653 "traddr": "10.0.0.2", 00:22:20.653 "adrfam": "ipv4", 00:22:20.653 "trsvcid": "4420", 00:22:20.653 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.653 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:20.653 "hdgst": false, 00:22:20.653 "ddgst": false 00:22:20.653 }, 00:22:20.653 "method": "bdev_nvme_attach_controller" 00:22:20.653 },{ 00:22:20.653 "params": { 00:22:20.653 "name": "Nvme2", 00:22:20.653 "trtype": "tcp", 00:22:20.653 "traddr": "10.0.0.2", 00:22:20.653 "adrfam": "ipv4", 00:22:20.653 "trsvcid": "4420", 00:22:20.653 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:20.653 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:20.653 "hdgst": false, 00:22:20.653 "ddgst": false 00:22:20.653 }, 00:22:20.653 "method": "bdev_nvme_attach_controller" 00:22:20.653 },{ 00:22:20.653 "params": { 00:22:20.653 "name": "Nvme3", 00:22:20.653 "trtype": "tcp", 00:22:20.653 "traddr": "10.0.0.2", 00:22:20.653 "adrfam": "ipv4", 00:22:20.653 "trsvcid": "4420", 00:22:20.653 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:20.653 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:20.653 "hdgst": false, 00:22:20.653 "ddgst": false 00:22:20.653 }, 00:22:20.653 "method": "bdev_nvme_attach_controller" 00:22:20.653 },{ 00:22:20.653 "params": { 00:22:20.653 "name": "Nvme4", 00:22:20.653 "trtype": "tcp", 00:22:20.653 "traddr": "10.0.0.2", 00:22:20.653 "adrfam": "ipv4", 00:22:20.653 "trsvcid": "4420", 00:22:20.653 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:20.653 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:20.653 "hdgst": false, 00:22:20.653 "ddgst": false 00:22:20.653 }, 00:22:20.653 "method": "bdev_nvme_attach_controller" 00:22:20.653 },{ 00:22:20.653 "params": { 
00:22:20.653 "name": "Nvme5", 00:22:20.653 "trtype": "tcp", 00:22:20.653 "traddr": "10.0.0.2", 00:22:20.653 "adrfam": "ipv4", 00:22:20.653 "trsvcid": "4420", 00:22:20.653 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:20.653 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:20.653 "hdgst": false, 00:22:20.653 "ddgst": false 00:22:20.653 }, 00:22:20.653 "method": "bdev_nvme_attach_controller" 00:22:20.653 },{ 00:22:20.653 "params": { 00:22:20.653 "name": "Nvme6", 00:22:20.653 "trtype": "tcp", 00:22:20.653 "traddr": "10.0.0.2", 00:22:20.653 "adrfam": "ipv4", 00:22:20.653 "trsvcid": "4420", 00:22:20.653 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:20.653 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:20.653 "hdgst": false, 00:22:20.653 "ddgst": false 00:22:20.653 }, 00:22:20.653 "method": "bdev_nvme_attach_controller" 00:22:20.653 },{ 00:22:20.653 "params": { 00:22:20.653 "name": "Nvme7", 00:22:20.653 "trtype": "tcp", 00:22:20.653 "traddr": "10.0.0.2", 00:22:20.653 "adrfam": "ipv4", 00:22:20.653 "trsvcid": "4420", 00:22:20.653 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:20.653 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:20.653 "hdgst": false, 00:22:20.653 "ddgst": false 00:22:20.653 }, 00:22:20.653 "method": "bdev_nvme_attach_controller" 00:22:20.653 },{ 00:22:20.653 "params": { 00:22:20.653 "name": "Nvme8", 00:22:20.653 "trtype": "tcp", 00:22:20.653 "traddr": "10.0.0.2", 00:22:20.653 "adrfam": "ipv4", 00:22:20.653 "trsvcid": "4420", 00:22:20.653 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:20.653 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:20.653 "hdgst": false, 00:22:20.653 "ddgst": false 00:22:20.653 }, 00:22:20.653 "method": "bdev_nvme_attach_controller" 00:22:20.653 },{ 00:22:20.653 "params": { 00:22:20.653 "name": "Nvme9", 00:22:20.653 "trtype": "tcp", 00:22:20.653 "traddr": "10.0.0.2", 00:22:20.653 "adrfam": "ipv4", 00:22:20.653 "trsvcid": "4420", 00:22:20.653 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:20.653 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:20.653 "hdgst": false, 00:22:20.653 "ddgst": false 00:22:20.653 }, 00:22:20.653 "method": "bdev_nvme_attach_controller" 00:22:20.653 },{ 00:22:20.653 "params": { 00:22:20.653 "name": "Nvme10", 00:22:20.653 "trtype": "tcp", 00:22:20.653 "traddr": "10.0.0.2", 00:22:20.653 "adrfam": "ipv4", 00:22:20.653 "trsvcid": "4420", 00:22:20.653 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:20.653 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:20.653 "hdgst": false, 00:22:20.653 "ddgst": false 00:22:20.653 }, 00:22:20.653 "method": "bdev_nvme_attach_controller" 00:22:20.653 }' 00:22:20.653 [2024-11-06 11:04:11.787498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.654 [2024-11-06 11:04:11.823551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.036 Running I/O for 1 seconds... 00:22:23.239 1800.00 IOPS, 112.50 MiB/s 00:22:23.239 Latency(us) 00:22:23.239 [2024-11-06T10:04:14.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.239 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:23.240 Verification LBA range: start 0x0 length 0x400 00:22:23.240 Nvme1n1 : 1.13 225.57 14.10 0.00 0.00 280503.25 21299.20 248162.99 00:22:23.240 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:23.240 Verification LBA range: start 0x0 length 0x400 00:22:23.240 Nvme2n1 : 1.15 223.25 13.95 0.00 0.00 278925.65 17148.59 253405.87 00:22:23.240 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:23.240 Verification LBA range: start 0x0 length 0x400 00:22:23.240 Nvme3n1 : 1.11 231.48 14.47 0.00 0.00 263986.56 17367.04 255153.49 00:22:23.240 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:23.240 Verification LBA range: start 0x0 length 0x400 00:22:23.240 Nvme4n1 : 1.11 234.28 14.64 0.00 0.00 254497.71 6908.59 248162.99 00:22:23.240 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:23.240 Verification LBA range: start 0x0 length 0x400 00:22:23.240 Nvme5n1 : 1.15 222.54 13.91 0.00 0.00 265345.07 26651.31 255153.49 00:22:23.240 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:23.240 Verification LBA range: start 0x0 length 0x400 00:22:23.240 Nvme6n1 : 1.14 224.72 14.04 0.00 0.00 257758.93 19223.89 251658.24 00:22:23.240 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:23.240 Verification LBA range: start 0x0 length 0x400 00:22:23.240 Nvme7n1 : 1.15 277.11 17.32 0.00 0.00 205369.26 10431.15 237677.23 00:22:23.240 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:23.240 Verification LBA range: start 0x0 length 0x400 00:22:23.240 Nvme8n1 : 1.19 269.79 16.86 0.00 0.00 207274.33 9939.63 228939.09 00:22:23.240 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:23.240 Verification LBA range: start 0x0 length 0x400 00:22:23.240 Nvme9n1 : 1.16 220.90 13.81 0.00 0.00 248367.36 17367.04 297096.53 00:22:23.240 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:23.240 Verification LBA range: start 0x0 length 0x400 00:22:23.240 Nvme10n1 : 1.20 267.02 16.69 0.00 0.00 202743.77 6635.52 277872.64 00:22:23.240 [2024-11-06T10:04:14.662Z] =================================================================================================================== 00:22:23.240 [2024-11-06T10:04:14.662Z] Total : 2396.66 149.79 0.00 0.00 243608.27 6635.52 297096.53 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:23.500 rmmod nvme_tcp 00:22:23.500 rmmod nvme_fabrics 00:22:23.500 rmmod nvme_keyring 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3318928 ']' 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3318928 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 3318928 ']' 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@956 -- # kill -0 3318928 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3318928 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3318928' 00:22:23.500 killing process with pid 3318928 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 3318928 00:22:23.500 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 3318928 00:22:23.761 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:23.761 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:23.761 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:23.761 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:23.761 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:23.761 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:23.761 11:04:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:23.761 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:23.761 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:23.761 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.761 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.761 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:26.306 00:22:26.306 real 0m16.987s 00:22:26.306 user 0m35.534s 00:22:26.306 sys 0m6.742s 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.306 ************************************ 00:22:26.306 END TEST nvmf_shutdown_tc1 00:22:26.306 ************************************ 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:26.306 ************************************ 00:22:26.306 
START TEST nvmf_shutdown_tc2 00:22:26.306 ************************************ 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:26.306 11:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:26.306 11:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:26.306 11:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.306 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:26.307 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:26.307 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:26.307 11:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.307 11:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:26.307 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:26.307 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:26.307 11:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:26.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:26.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:22:26.307 00:22:26.307 --- 10.0.0.2 ping statistics --- 00:22:26.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.307 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:26.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:22:26.307 00:22:26.307 --- 10.0.0.1 ping statistics --- 00:22:26.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.307 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:26.307 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.308 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:26.308 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:26.308 11:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:26.308 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:26.308 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:26.308 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:26.308 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3321119 00:22:26.308 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3321119 00:22:26.308 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:26.308 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3321119 ']' 00:22:26.308 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.308 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:26.308 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:26.308 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:26.308 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:26.308 [2024-11-06 11:04:17.655899] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:22:26.308 [2024-11-06 11:04:17.655952] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.569 [2024-11-06 11:04:17.747423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:26.569 [2024-11-06 11:04:17.777762] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.569 [2024-11-06 11:04:17.777791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.569 [2024-11-06 11:04:17.777797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.569 [2024-11-06 11:04:17.777802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.569 [2024-11-06 11:04:17.777806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:26.569 [2024-11-06 11:04:17.779211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.569 [2024-11-06 11:04:17.779374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:26.569 [2024-11-06 11:04:17.779532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.569 [2024-11-06 11:04:17.779535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.141 [2024-11-06 11:04:18.495569] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.141 11:04:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.141 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.402 Malloc1 00:22:27.402 [2024-11-06 11:04:18.608038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.402 Malloc2 00:22:27.402 Malloc3 00:22:27.402 Malloc4 00:22:27.402 Malloc5 00:22:27.402 Malloc6 00:22:27.402 Malloc7 00:22:27.664 Malloc8 00:22:27.664 Malloc9 
00:22:27.664 Malloc10 00:22:27.664 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.664 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:27.664 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:27.664 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3321484 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3321484 /var/tmp/bdevperf.sock 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3321484 ']' 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.664 { 00:22:27.664 "params": { 00:22:27.664 "name": "Nvme$subsystem", 00:22:27.664 "trtype": "$TEST_TRANSPORT", 00:22:27.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.664 "adrfam": "ipv4", 00:22:27.664 "trsvcid": "$NVMF_PORT", 00:22:27.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.664 "hdgst": ${hdgst:-false}, 00:22:27.664 "ddgst": ${ddgst:-false} 00:22:27.664 }, 00:22:27.664 "method": "bdev_nvme_attach_controller" 00:22:27.664 } 00:22:27.664 EOF 00:22:27.664 )") 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.664 { 00:22:27.664 "params": { 00:22:27.664 "name": "Nvme$subsystem", 00:22:27.664 "trtype": "$TEST_TRANSPORT", 00:22:27.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.664 "adrfam": "ipv4", 00:22:27.664 "trsvcid": "$NVMF_PORT", 00:22:27.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.664 "hdgst": ${hdgst:-false}, 00:22:27.664 "ddgst": ${ddgst:-false} 00:22:27.664 }, 00:22:27.664 "method": "bdev_nvme_attach_controller" 00:22:27.664 } 00:22:27.664 EOF 00:22:27.664 )") 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.664 { 00:22:27.664 "params": { 00:22:27.664 "name": "Nvme$subsystem", 00:22:27.664 "trtype": "$TEST_TRANSPORT", 00:22:27.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.664 "adrfam": "ipv4", 00:22:27.664 "trsvcid": "$NVMF_PORT", 00:22:27.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.664 "hdgst": ${hdgst:-false}, 00:22:27.664 "ddgst": ${ddgst:-false} 00:22:27.664 }, 00:22:27.664 "method": "bdev_nvme_attach_controller" 00:22:27.664 } 00:22:27.664 EOF 00:22:27.664 )") 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:22:27.664 { 00:22:27.664 "params": { 00:22:27.664 "name": "Nvme$subsystem", 00:22:27.664 "trtype": "$TEST_TRANSPORT", 00:22:27.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.664 "adrfam": "ipv4", 00:22:27.664 "trsvcid": "$NVMF_PORT", 00:22:27.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.664 "hdgst": ${hdgst:-false}, 00:22:27.664 "ddgst": ${ddgst:-false} 00:22:27.664 }, 00:22:27.664 "method": "bdev_nvme_attach_controller" 00:22:27.664 } 00:22:27.664 EOF 00:22:27.664 )") 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.664 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.664 { 00:22:27.664 "params": { 00:22:27.664 "name": "Nvme$subsystem", 00:22:27.665 "trtype": "$TEST_TRANSPORT", 00:22:27.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.665 "adrfam": "ipv4", 00:22:27.665 "trsvcid": "$NVMF_PORT", 00:22:27.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.665 "hdgst": ${hdgst:-false}, 00:22:27.665 "ddgst": ${ddgst:-false} 00:22:27.665 }, 00:22:27.665 "method": "bdev_nvme_attach_controller" 00:22:27.665 } 00:22:27.665 EOF 00:22:27.665 )") 00:22:27.665 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:27.665 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.665 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.665 { 00:22:27.665 "params": { 00:22:27.665 "name": "Nvme$subsystem", 00:22:27.665 "trtype": "$TEST_TRANSPORT", 
00:22:27.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.665 "adrfam": "ipv4", 00:22:27.665 "trsvcid": "$NVMF_PORT", 00:22:27.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.665 "hdgst": ${hdgst:-false}, 00:22:27.665 "ddgst": ${ddgst:-false} 00:22:27.665 }, 00:22:27.665 "method": "bdev_nvme_attach_controller" 00:22:27.665 } 00:22:27.665 EOF 00:22:27.665 )") 00:22:27.665 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:27.665 [2024-11-06 11:04:19.055877] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:22:27.665 [2024-11-06 11:04:19.055934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3321484 ] 00:22:27.665 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.665 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.665 { 00:22:27.665 "params": { 00:22:27.665 "name": "Nvme$subsystem", 00:22:27.665 "trtype": "$TEST_TRANSPORT", 00:22:27.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.665 "adrfam": "ipv4", 00:22:27.665 "trsvcid": "$NVMF_PORT", 00:22:27.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.665 "hdgst": ${hdgst:-false}, 00:22:27.665 "ddgst": ${ddgst:-false} 00:22:27.665 }, 00:22:27.665 "method": "bdev_nvme_attach_controller" 00:22:27.665 } 00:22:27.665 EOF 00:22:27.665 )") 00:22:27.665 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:27.665 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.665 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.665 { 00:22:27.665 "params": { 00:22:27.665 "name": "Nvme$subsystem", 00:22:27.665 "trtype": "$TEST_TRANSPORT", 00:22:27.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.665 "adrfam": "ipv4", 00:22:27.665 "trsvcid": "$NVMF_PORT", 00:22:27.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.665 "hdgst": ${hdgst:-false}, 00:22:27.665 "ddgst": ${ddgst:-false} 00:22:27.665 }, 00:22:27.665 "method": "bdev_nvme_attach_controller" 00:22:27.665 } 00:22:27.665 EOF 00:22:27.665 )") 00:22:27.665 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:27.665 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.665 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.665 { 00:22:27.665 "params": { 00:22:27.665 "name": "Nvme$subsystem", 00:22:27.665 "trtype": "$TEST_TRANSPORT", 00:22:27.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.665 "adrfam": "ipv4", 00:22:27.665 "trsvcid": "$NVMF_PORT", 00:22:27.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.665 "hdgst": ${hdgst:-false}, 00:22:27.665 "ddgst": ${ddgst:-false} 00:22:27.665 }, 00:22:27.665 "method": "bdev_nvme_attach_controller" 00:22:27.665 } 00:22:27.665 EOF 00:22:27.665 )") 00:22:27.665 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:27.665 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.665 11:04:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.665 { 00:22:27.665 "params": { 00:22:27.665 "name": "Nvme$subsystem", 00:22:27.665 "trtype": "$TEST_TRANSPORT", 00:22:27.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.665 "adrfam": "ipv4", 00:22:27.665 "trsvcid": "$NVMF_PORT", 00:22:27.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.665 "hdgst": ${hdgst:-false}, 00:22:27.665 "ddgst": ${ddgst:-false} 00:22:27.665 }, 00:22:27.665 "method": "bdev_nvme_attach_controller" 00:22:27.665 } 00:22:27.665 EOF 00:22:27.665 )") 00:22:27.665 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:27.925 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:27.925 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:27.925 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:27.925 "params": { 00:22:27.925 "name": "Nvme1", 00:22:27.925 "trtype": "tcp", 00:22:27.925 "traddr": "10.0.0.2", 00:22:27.925 "adrfam": "ipv4", 00:22:27.925 "trsvcid": "4420", 00:22:27.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.925 "hdgst": false, 00:22:27.925 "ddgst": false 00:22:27.925 }, 00:22:27.925 "method": "bdev_nvme_attach_controller" 00:22:27.925 },{ 00:22:27.925 "params": { 00:22:27.925 "name": "Nvme2", 00:22:27.925 "trtype": "tcp", 00:22:27.925 "traddr": "10.0.0.2", 00:22:27.925 "adrfam": "ipv4", 00:22:27.925 "trsvcid": "4420", 00:22:27.925 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:27.925 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:27.925 "hdgst": false, 00:22:27.925 "ddgst": false 00:22:27.925 }, 00:22:27.925 "method": "bdev_nvme_attach_controller" 00:22:27.925 },{ 
00:22:27.925 "params": { 00:22:27.925 "name": "Nvme3", 00:22:27.925 "trtype": "tcp", 00:22:27.925 "traddr": "10.0.0.2", 00:22:27.925 "adrfam": "ipv4", 00:22:27.925 "trsvcid": "4420", 00:22:27.925 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:27.925 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:27.925 "hdgst": false, 00:22:27.925 "ddgst": false 00:22:27.925 }, 00:22:27.925 "method": "bdev_nvme_attach_controller" 00:22:27.925 },{ 00:22:27.925 "params": { 00:22:27.925 "name": "Nvme4", 00:22:27.925 "trtype": "tcp", 00:22:27.925 "traddr": "10.0.0.2", 00:22:27.925 "adrfam": "ipv4", 00:22:27.926 "trsvcid": "4420", 00:22:27.926 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:27.926 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:27.926 "hdgst": false, 00:22:27.926 "ddgst": false 00:22:27.926 }, 00:22:27.926 "method": "bdev_nvme_attach_controller" 00:22:27.926 },{ 00:22:27.926 "params": { 00:22:27.926 "name": "Nvme5", 00:22:27.926 "trtype": "tcp", 00:22:27.926 "traddr": "10.0.0.2", 00:22:27.926 "adrfam": "ipv4", 00:22:27.926 "trsvcid": "4420", 00:22:27.926 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:27.926 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:27.926 "hdgst": false, 00:22:27.926 "ddgst": false 00:22:27.926 }, 00:22:27.926 "method": "bdev_nvme_attach_controller" 00:22:27.926 },{ 00:22:27.926 "params": { 00:22:27.926 "name": "Nvme6", 00:22:27.926 "trtype": "tcp", 00:22:27.926 "traddr": "10.0.0.2", 00:22:27.926 "adrfam": "ipv4", 00:22:27.926 "trsvcid": "4420", 00:22:27.926 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:27.926 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:27.926 "hdgst": false, 00:22:27.926 "ddgst": false 00:22:27.926 }, 00:22:27.926 "method": "bdev_nvme_attach_controller" 00:22:27.926 },{ 00:22:27.926 "params": { 00:22:27.926 "name": "Nvme7", 00:22:27.926 "trtype": "tcp", 00:22:27.926 "traddr": "10.0.0.2", 00:22:27.926 "adrfam": "ipv4", 00:22:27.926 "trsvcid": "4420", 00:22:27.926 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:27.926 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:22:27.926 "hdgst": false, 00:22:27.926 "ddgst": false 00:22:27.926 }, 00:22:27.926 "method": "bdev_nvme_attach_controller" 00:22:27.926 },{ 00:22:27.926 "params": { 00:22:27.926 "name": "Nvme8", 00:22:27.926 "trtype": "tcp", 00:22:27.926 "traddr": "10.0.0.2", 00:22:27.926 "adrfam": "ipv4", 00:22:27.926 "trsvcid": "4420", 00:22:27.926 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:27.926 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:27.926 "hdgst": false, 00:22:27.926 "ddgst": false 00:22:27.926 }, 00:22:27.926 "method": "bdev_nvme_attach_controller" 00:22:27.926 },{ 00:22:27.926 "params": { 00:22:27.926 "name": "Nvme9", 00:22:27.926 "trtype": "tcp", 00:22:27.926 "traddr": "10.0.0.2", 00:22:27.926 "adrfam": "ipv4", 00:22:27.926 "trsvcid": "4420", 00:22:27.926 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:27.926 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:27.926 "hdgst": false, 00:22:27.926 "ddgst": false 00:22:27.926 }, 00:22:27.926 "method": "bdev_nvme_attach_controller" 00:22:27.926 },{ 00:22:27.926 "params": { 00:22:27.926 "name": "Nvme10", 00:22:27.926 "trtype": "tcp", 00:22:27.926 "traddr": "10.0.0.2", 00:22:27.926 "adrfam": "ipv4", 00:22:27.926 "trsvcid": "4420", 00:22:27.926 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:27.926 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:27.926 "hdgst": false, 00:22:27.926 "ddgst": false 00:22:27.926 }, 00:22:27.926 "method": "bdev_nvme_attach_controller" 00:22:27.926 }' 00:22:27.926 [2024-11-06 11:04:19.127653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.926 [2024-11-06 11:04:19.163625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.842 Running I/O for 10 seconds... 
00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3321484 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3321484 ']' 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3321484 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3321484 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 
-- # process_name=reactor_0 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3321484' 00:22:30.414 killing process with pid 3321484 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3321484 00:22:30.414 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3321484 00:22:30.414 Received shutdown signal, test time was about 0.873551 seconds 00:22:30.414 00:22:30.414 Latency(us) 00:22:30.414 [2024-11-06T10:04:21.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.414 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.414 Verification LBA range: start 0x0 length 0x400 00:22:30.414 Nvme1n1 : 0.85 225.18 14.07 0.00 0.00 280318.86 33204.91 265639.25 00:22:30.414 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.414 Verification LBA range: start 0x0 length 0x400 00:22:30.414 Nvme2n1 : 0.85 224.62 14.04 0.00 0.00 274825.96 18677.76 234181.97 00:22:30.414 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.414 Verification LBA range: start 0x0 length 0x400 00:22:30.414 Nvme3n1 : 0.87 293.36 18.34 0.00 0.00 205585.49 23156.05 228939.09 00:22:30.414 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.414 Verification LBA range: start 0x0 length 0x400 00:22:30.414 Nvme4n1 : 0.84 228.96 14.31 0.00 0.00 256556.94 36263.25 223696.21 00:22:30.414 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.414 Verification LBA range: start 0x0 length 0x400 00:22:30.414 Nvme5n1 : 0.86 222.20 13.89 0.00 0.00 258459.88 21080.75 
256901.12 00:22:30.414 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.414 Verification LBA range: start 0x0 length 0x400 00:22:30.414 Nvme6n1 : 0.84 227.34 14.21 0.00 0.00 245500.59 39540.05 272629.76 00:22:30.414 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.414 Verification LBA range: start 0x0 length 0x400 00:22:30.414 Nvme7n1 : 0.85 227.09 14.19 0.00 0.00 238423.32 16274.77 246415.36 00:22:30.414 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.414 Verification LBA range: start 0x0 length 0x400 00:22:30.414 Nvme8n1 : 0.87 294.28 18.39 0.00 0.00 180733.87 14636.37 249910.61 00:22:30.414 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.414 Verification LBA range: start 0x0 length 0x400 00:22:30.414 Nvme9n1 : 0.87 221.38 13.84 0.00 0.00 233152.00 15400.96 265639.25 00:22:30.414 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.414 Verification LBA range: start 0x0 length 0x400 00:22:30.414 Nvme10n1 : 0.86 223.86 13.99 0.00 0.00 223978.38 18786.99 222822.40 00:22:30.414 [2024-11-06T10:04:21.836Z] =================================================================================================================== 00:22:30.414 [2024-11-06T10:04:21.836Z] Total : 2388.26 149.27 0.00 0.00 236841.41 14636.37 272629.76 00:22:30.676 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:31.617 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3321119 00:22:31.617 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:31.617 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:31.617 11:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:31.617 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:31.617 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:31.617 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:31.617 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:31.617 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:31.618 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:31.618 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:31.618 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:31.618 rmmod nvme_tcp 00:22:31.618 rmmod nvme_fabrics 00:22:31.618 rmmod nvme_keyring 00:22:31.618 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:31.618 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:31.618 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:31.618 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3321119 ']' 00:22:31.618 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3321119 00:22:31.618 11:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3321119 ']' 00:22:31.618 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3321119 00:22:31.618 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:31.618 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:31.618 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3321119 00:22:31.618 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:31.878 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:31.878 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3321119' 00:22:31.879 killing process with pid 3321119 00:22:31.879 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3321119 00:22:31.879 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3321119 00:22:31.879 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:31.879 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:31.879 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:31.879 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:31.879 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@791 -- # iptables-save 00:22:31.879 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:31.879 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:31.879 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:31.879 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:31.879 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.879 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.879 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:34.427 00:22:34.427 real 0m8.095s 00:22:34.427 user 0m24.976s 00:22:34.427 sys 0m1.299s 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:34.427 ************************************ 00:22:34.427 END TEST nvmf_shutdown_tc2 00:22:34.427 ************************************ 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:34.427 ************************************ 00:22:34.427 START TEST nvmf_shutdown_tc3 00:22:34.427 ************************************ 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 
00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 
00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.427 11:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:34.427 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:34.427 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.1 (0x8086 - 0x159b)' 00:22:34.428 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.428 11:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:34.428 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:34.428 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:34.428 11:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:34.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:34.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:22:34.428 00:22:34.428 --- 10.0.0.2 ping statistics --- 00:22:34.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.428 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:34.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:34.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:22:34.428 00:22:34.428 --- 10.0.0.1 ping statistics --- 00:22:34.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.428 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:34.428 11:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3322886 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3322886 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3322886 ']' 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:34.428 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:34.689 [2024-11-06 11:04:25.873280] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:22:34.689 [2024-11-06 11:04:25.873367] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.689 [2024-11-06 11:04:25.969253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.689 [2024-11-06 11:04:26.003135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.689 [2024-11-06 11:04:26.003169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.689 [2024-11-06 11:04:26.003174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.689 [2024-11-06 11:04:26.003179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.689 [2024-11-06 11:04:26.003183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:34.689 [2024-11-06 11:04:26.004735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.689 [2024-11-06 11:04:26.004882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:34.689 [2024-11-06 11:04:26.005188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.689 [2024-11-06 11:04:26.005188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:35.260 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:35.260 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:35.260 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:35.260 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:35.260 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.521 [2024-11-06 11:04:26.707670] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.521 11:04:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.521 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.521 Malloc1 00:22:35.521 [2024-11-06 11:04:26.823381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.521 Malloc2 00:22:35.521 Malloc3 00:22:35.521 Malloc4 00:22:35.781 Malloc5 00:22:35.781 Malloc6 00:22:35.781 Malloc7 00:22:35.781 Malloc8 00:22:35.781 Malloc9 
00:22:35.781 Malloc10 00:22:35.781 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.781 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:35.781 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:35.782 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3323130 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3323130 /var/tmp/bdevperf.sock 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3323130 ']' 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.043 { 00:22:36.043 "params": { 00:22:36.043 "name": "Nvme$subsystem", 00:22:36.043 "trtype": "$TEST_TRANSPORT", 00:22:36.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.043 "adrfam": "ipv4", 00:22:36.043 "trsvcid": "$NVMF_PORT", 00:22:36.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.043 "hdgst": ${hdgst:-false}, 00:22:36.043 "ddgst": ${ddgst:-false} 00:22:36.043 }, 00:22:36.043 "method": "bdev_nvme_attach_controller" 00:22:36.043 } 00:22:36.043 EOF 00:22:36.043 )") 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.043 { 00:22:36.043 "params": { 00:22:36.043 "name": "Nvme$subsystem", 00:22:36.043 "trtype": "$TEST_TRANSPORT", 00:22:36.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.043 "adrfam": "ipv4", 00:22:36.043 "trsvcid": "$NVMF_PORT", 00:22:36.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.043 "hdgst": ${hdgst:-false}, 00:22:36.043 "ddgst": ${ddgst:-false} 00:22:36.043 }, 00:22:36.043 "method": "bdev_nvme_attach_controller" 00:22:36.043 } 00:22:36.043 EOF 00:22:36.043 )") 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.043 { 00:22:36.043 "params": { 00:22:36.043 "name": "Nvme$subsystem", 00:22:36.043 "trtype": "$TEST_TRANSPORT", 00:22:36.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.043 "adrfam": "ipv4", 00:22:36.043 "trsvcid": "$NVMF_PORT", 00:22:36.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.043 "hdgst": ${hdgst:-false}, 00:22:36.043 "ddgst": ${ddgst:-false} 00:22:36.043 }, 00:22:36.043 "method": "bdev_nvme_attach_controller" 00:22:36.043 } 00:22:36.043 EOF 00:22:36.043 )") 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.043 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:22:36.043 { 00:22:36.043 "params": { 00:22:36.043 "name": "Nvme$subsystem", 00:22:36.043 "trtype": "$TEST_TRANSPORT", 00:22:36.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.043 "adrfam": "ipv4", 00:22:36.043 "trsvcid": "$NVMF_PORT", 00:22:36.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.044 "hdgst": ${hdgst:-false}, 00:22:36.044 "ddgst": ${ddgst:-false} 00:22:36.044 }, 00:22:36.044 "method": "bdev_nvme_attach_controller" 00:22:36.044 } 00:22:36.044 EOF 00:22:36.044 )") 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.044 { 00:22:36.044 "params": { 00:22:36.044 "name": "Nvme$subsystem", 00:22:36.044 "trtype": "$TEST_TRANSPORT", 00:22:36.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.044 "adrfam": "ipv4", 00:22:36.044 "trsvcid": "$NVMF_PORT", 00:22:36.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.044 "hdgst": ${hdgst:-false}, 00:22:36.044 "ddgst": ${ddgst:-false} 00:22:36.044 }, 00:22:36.044 "method": "bdev_nvme_attach_controller" 00:22:36.044 } 00:22:36.044 EOF 00:22:36.044 )") 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.044 { 00:22:36.044 "params": { 00:22:36.044 "name": "Nvme$subsystem", 00:22:36.044 "trtype": "$TEST_TRANSPORT", 
00:22:36.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.044 "adrfam": "ipv4", 00:22:36.044 "trsvcid": "$NVMF_PORT", 00:22:36.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.044 "hdgst": ${hdgst:-false}, 00:22:36.044 "ddgst": ${ddgst:-false} 00:22:36.044 }, 00:22:36.044 "method": "bdev_nvme_attach_controller" 00:22:36.044 } 00:22:36.044 EOF 00:22:36.044 )") 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:36.044 [2024-11-06 11:04:27.278298] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:22:36.044 [2024-11-06 11:04:27.278353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323130 ] 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.044 { 00:22:36.044 "params": { 00:22:36.044 "name": "Nvme$subsystem", 00:22:36.044 "trtype": "$TEST_TRANSPORT", 00:22:36.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.044 "adrfam": "ipv4", 00:22:36.044 "trsvcid": "$NVMF_PORT", 00:22:36.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.044 "hdgst": ${hdgst:-false}, 00:22:36.044 "ddgst": ${ddgst:-false} 00:22:36.044 }, 00:22:36.044 "method": "bdev_nvme_attach_controller" 00:22:36.044 } 00:22:36.044 EOF 00:22:36.044 )") 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.044 { 00:22:36.044 "params": { 00:22:36.044 "name": "Nvme$subsystem", 00:22:36.044 "trtype": "$TEST_TRANSPORT", 00:22:36.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.044 "adrfam": "ipv4", 00:22:36.044 "trsvcid": "$NVMF_PORT", 00:22:36.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.044 "hdgst": ${hdgst:-false}, 00:22:36.044 "ddgst": ${ddgst:-false} 00:22:36.044 }, 00:22:36.044 "method": "bdev_nvme_attach_controller" 00:22:36.044 } 00:22:36.044 EOF 00:22:36.044 )") 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.044 { 00:22:36.044 "params": { 00:22:36.044 "name": "Nvme$subsystem", 00:22:36.044 "trtype": "$TEST_TRANSPORT", 00:22:36.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.044 "adrfam": "ipv4", 00:22:36.044 "trsvcid": "$NVMF_PORT", 00:22:36.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.044 "hdgst": ${hdgst:-false}, 00:22:36.044 "ddgst": ${ddgst:-false} 00:22:36.044 }, 00:22:36.044 "method": "bdev_nvme_attach_controller" 00:22:36.044 } 00:22:36.044 EOF 00:22:36.044 )") 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.044 11:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.044 { 00:22:36.044 "params": { 00:22:36.044 "name": "Nvme$subsystem", 00:22:36.044 "trtype": "$TEST_TRANSPORT", 00:22:36.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.044 "adrfam": "ipv4", 00:22:36.044 "trsvcid": "$NVMF_PORT", 00:22:36.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.044 "hdgst": ${hdgst:-false}, 00:22:36.044 "ddgst": ${ddgst:-false} 00:22:36.044 }, 00:22:36.044 "method": "bdev_nvme_attach_controller" 00:22:36.044 } 00:22:36.044 EOF 00:22:36.044 )") 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:36.044 11:04:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:36.044 "params": { 00:22:36.044 "name": "Nvme1", 00:22:36.044 "trtype": "tcp", 00:22:36.044 "traddr": "10.0.0.2", 00:22:36.044 "adrfam": "ipv4", 00:22:36.044 "trsvcid": "4420", 00:22:36.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:36.044 "hdgst": false, 00:22:36.044 "ddgst": false 00:22:36.044 }, 00:22:36.044 "method": "bdev_nvme_attach_controller" 00:22:36.044 },{ 00:22:36.044 "params": { 00:22:36.044 "name": "Nvme2", 00:22:36.044 "trtype": "tcp", 00:22:36.044 "traddr": "10.0.0.2", 00:22:36.044 "adrfam": "ipv4", 00:22:36.044 "trsvcid": "4420", 00:22:36.044 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:36.044 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:36.044 "hdgst": false, 00:22:36.044 "ddgst": false 00:22:36.044 }, 00:22:36.044 "method": "bdev_nvme_attach_controller" 00:22:36.044 },{ 
00:22:36.044 "params": { 00:22:36.044 "name": "Nvme3", 00:22:36.044 "trtype": "tcp", 00:22:36.044 "traddr": "10.0.0.2", 00:22:36.044 "adrfam": "ipv4", 00:22:36.044 "trsvcid": "4420", 00:22:36.044 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:36.044 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:36.044 "hdgst": false, 00:22:36.044 "ddgst": false 00:22:36.044 }, 00:22:36.044 "method": "bdev_nvme_attach_controller" 00:22:36.044 },{ 00:22:36.044 "params": { 00:22:36.044 "name": "Nvme4", 00:22:36.044 "trtype": "tcp", 00:22:36.044 "traddr": "10.0.0.2", 00:22:36.044 "adrfam": "ipv4", 00:22:36.044 "trsvcid": "4420", 00:22:36.044 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:36.044 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:36.044 "hdgst": false, 00:22:36.044 "ddgst": false 00:22:36.044 }, 00:22:36.044 "method": "bdev_nvme_attach_controller" 00:22:36.044 },{ 00:22:36.044 "params": { 00:22:36.044 "name": "Nvme5", 00:22:36.044 "trtype": "tcp", 00:22:36.044 "traddr": "10.0.0.2", 00:22:36.044 "adrfam": "ipv4", 00:22:36.044 "trsvcid": "4420", 00:22:36.044 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:36.044 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:36.044 "hdgst": false, 00:22:36.044 "ddgst": false 00:22:36.044 }, 00:22:36.044 "method": "bdev_nvme_attach_controller" 00:22:36.044 },{ 00:22:36.044 "params": { 00:22:36.044 "name": "Nvme6", 00:22:36.044 "trtype": "tcp", 00:22:36.044 "traddr": "10.0.0.2", 00:22:36.044 "adrfam": "ipv4", 00:22:36.044 "trsvcid": "4420", 00:22:36.044 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:36.044 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:36.044 "hdgst": false, 00:22:36.044 "ddgst": false 00:22:36.044 }, 00:22:36.044 "method": "bdev_nvme_attach_controller" 00:22:36.044 },{ 00:22:36.044 "params": { 00:22:36.044 "name": "Nvme7", 00:22:36.044 "trtype": "tcp", 00:22:36.044 "traddr": "10.0.0.2", 00:22:36.044 "adrfam": "ipv4", 00:22:36.044 "trsvcid": "4420", 00:22:36.044 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:36.044 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:22:36.044 "hdgst": false, 00:22:36.044 "ddgst": false 00:22:36.044 }, 00:22:36.044 "method": "bdev_nvme_attach_controller" 00:22:36.044 },{ 00:22:36.044 "params": { 00:22:36.044 "name": "Nvme8", 00:22:36.044 "trtype": "tcp", 00:22:36.044 "traddr": "10.0.0.2", 00:22:36.044 "adrfam": "ipv4", 00:22:36.044 "trsvcid": "4420", 00:22:36.044 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:36.044 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:36.044 "hdgst": false, 00:22:36.044 "ddgst": false 00:22:36.045 }, 00:22:36.045 "method": "bdev_nvme_attach_controller" 00:22:36.045 },{ 00:22:36.045 "params": { 00:22:36.045 "name": "Nvme9", 00:22:36.045 "trtype": "tcp", 00:22:36.045 "traddr": "10.0.0.2", 00:22:36.045 "adrfam": "ipv4", 00:22:36.045 "trsvcid": "4420", 00:22:36.045 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:36.045 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:36.045 "hdgst": false, 00:22:36.045 "ddgst": false 00:22:36.045 }, 00:22:36.045 "method": "bdev_nvme_attach_controller" 00:22:36.045 },{ 00:22:36.045 "params": { 00:22:36.045 "name": "Nvme10", 00:22:36.045 "trtype": "tcp", 00:22:36.045 "traddr": "10.0.0.2", 00:22:36.045 "adrfam": "ipv4", 00:22:36.045 "trsvcid": "4420", 00:22:36.045 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:36.045 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:36.045 "hdgst": false, 00:22:36.045 "ddgst": false 00:22:36.045 }, 00:22:36.045 "method": "bdev_nvme_attach_controller" 00:22:36.045 }' 00:22:36.045 [2024-11-06 11:04:27.350525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.045 [2024-11-06 11:04:27.386755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.956 Running I/O for 10 seconds... 
00:22:37.956 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:37.956 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:37.956 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:37.956 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.956 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:37.956 11:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.956 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:38.216 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:38.216 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:38.216 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:38.216 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=136 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:38.491 11:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3322886 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3322886 ']' 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3322886 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3322886 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3322886' 00:22:38.491 killing process with pid 3322886 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 3322886 00:22:38.491 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 3322886 00:22:38.491 [2024-11-06 11:04:29.777899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8640 is same with the state(6) to be set 00:22:38.491 [2024-11-06 11:04:29.779161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daf30 is same with the state(6) to be set 00:22:38.492 [2024-11-06 11:04:29.780378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8b10 is same with the state(6) to be set 00:22:38.493 [2024-11-06 11:04:29.781548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.494 [2024-11-06 11:04:29.781809]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.494 [2024-11-06 11:04:29.781813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.494 [2024-11-06 11:04:29.781818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.494 [2024-11-06 11:04:29.781823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.494 [2024-11-06 11:04:29.781828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.494 [2024-11-06 11:04:29.781833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.494 [2024-11-06 11:04:29.781838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.494 [2024-11-06 11:04:29.781843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.494 [2024-11-06 11:04:29.781847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.494 [2024-11-06 11:04:29.781852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.494 [2024-11-06 11:04:29.781857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.494 [2024-11-06 11:04:29.781861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.494 [2024-11-06 11:04:29.781866] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.494 [2024-11-06 11:04:29.781871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.494 [2024-11-06 11:04:29.781876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.781880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.781885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8fe0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.782750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9850 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.782765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9850 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.783067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9d20 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.783079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9d20 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.783085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9d20 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.783091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9d20 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.783099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9d20 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.783104] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9d20 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.783109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9d20 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.783113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9d20 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.783118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9d20 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.783123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9d20 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.783128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9d20 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.783133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9d20 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.783986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784029] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784086] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784146] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784203] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.495 [2024-11-06 11:04:29.784256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784261] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da6e0 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784766] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784826] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784882] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784940] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.784996] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.785000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.785005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.785010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.785014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.785019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.785023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.785028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.785032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.496 [2024-11-06 11:04:29.785037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22daa60 is same with the state(6) to be set 00:22:38.497 [2024-11-06 11:04:29.789448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789495] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf9c70 is same with the state(6) to be set 00:22:38.497 [2024-11-06 11:04:29.789581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfda00 is same with the state(6) to be set 00:22:38.497 [2024-11-06 11:04:29.789669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2610 is same with the state(6) to be set 00:22:38.497 [2024-11-06 11:04:29.789767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbc050 is same with the state(6) to be set 00:22:38.497 [2024-11-06 11:04:29.789869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe043c0 is same with the state(6) to be set 00:22:38.497 [2024-11-06 11:04:29.789956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.789989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.789996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.790004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.790012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.790018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98acb0 is same with the state(6) to be set 00:22:38.497 [2024-11-06 11:04:29.790041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.790052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.790060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.790068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.790076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.790083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.790091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.790098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.790105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6750 is same with the state(6) to be set 00:22:38.497 [2024-11-06 11:04:29.790130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.790138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.790148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.790156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.790164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.790171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.790180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.790188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.790195] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97fc10 is same with the state(6) to be set 00:22:38.497 [2024-11-06 11:04:29.790218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.790227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.790235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.790243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.790251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.790258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.790266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.497 [2024-11-06 11:04:29.790274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.497 [2024-11-06 11:04:29.790281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9819f0 is same with the state(6) to be set 00:22:38.498 [2024-11-06 11:04:29.790310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.498 [2024-11-06 11:04:29.790319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.790327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.498 [2024-11-06 11:04:29.790335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.790343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.498 [2024-11-06 11:04:29.790351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.790359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.498 [2024-11-06 11:04:29.790367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.790374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988fc0 is same with the state(6) to be set 00:22:38.498 [2024-11-06 11:04:29.790994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 
[2024-11-06 11:04:29.791346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.498 [2024-11-06 11:04:29.791541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.498 [2024-11-06 11:04:29.791550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:38.499 [2024-11-06 11:04:29.791635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791726] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.499 [2024-11-06 11:04:29.791909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.499 [2024-11-06 11:04:29.791917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0
00:22:38.499 [2024-11-06 11:04:29.791926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.499 [2024-11-06 11:04:29.791933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION" pairs repeated for cid:33-42 (lba:28800-29952) ...]
00:22:38.499 [2024-11-06 11:04:29.792130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:38.499 [2024-11-06 11:04:29.792251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.499 [2024-11-06 11:04:29.792262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / "ABORTED - SQ DELETION" pairs repeated for cid:1-63 (lba:24704-32640) ...]
[... WRITE / "ABORTED - SQ DELETION" pairs for cid:55-63 (lba:31616-32640) reprinted ...]
00:22:38.502 [2024-11-06 11:04:29.805969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.502 [2024-11-06 11:04:29.805977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION" pairs repeated for cid:1-33 (lba:24704-28800); log truncated mid-entry ...]
00:22:38.502 [2024-11-06 11:04:29.806540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.502 [2024-11-06 11:04:29.806550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.502 [2024-11-06 11:04:29.806557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:38.503 [2024-11-06 11:04:29.806735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806832] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.806892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.806900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.807189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf9c70 (9): Bad file descriptor 00:22:38.503 [2024-11-06 11:04:29.807218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdfda00 (9): Bad file descriptor 00:22:38.503 [2024-11-06 11:04:29.807235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2610 
(9): Bad file descriptor 00:22:38.503 [2024-11-06 11:04:29.807248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbc050 (9): Bad file descriptor 00:22:38.503 [2024-11-06 11:04:29.807267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe043c0 (9): Bad file descriptor 00:22:38.503 [2024-11-06 11:04:29.807282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98acb0 (9): Bad file descriptor 00:22:38.503 [2024-11-06 11:04:29.807298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb6750 (9): Bad file descriptor 00:22:38.503 [2024-11-06 11:04:29.807314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97fc10 (9): Bad file descriptor 00:22:38.503 [2024-11-06 11:04:29.807327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9819f0 (9): Bad file descriptor 00:22:38.503 [2024-11-06 11:04:29.807344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x988fc0 (9): Bad file descriptor 00:22:38.503 [2024-11-06 11:04:29.813178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:38.503 [2024-11-06 11:04:29.813214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:38.503 [2024-11-06 11:04:29.813226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:38.503 [2024-11-06 11:04:29.814316] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:38.503 [2024-11-06 11:04:29.814369] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:38.503 [2024-11-06 11:04:29.814407] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:38.503 [2024-11-06 11:04:29.814744] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.503 [2024-11-06 11:04:29.814768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb6750 with addr=10.0.0.2, port=4420 00:22:38.503 [2024-11-06 11:04:29.814777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6750 is same with the state(6) to be set 00:22:38.503 [2024-11-06 11:04:29.815237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.503 [2024-11-06 11:04:29.815277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2610 with addr=10.0.0.2, port=4420 00:22:38.503 [2024-11-06 11:04:29.815288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2610 is same with the state(6) to be set 00:22:38.503 [2024-11-06 11:04:29.815622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.503 [2024-11-06 11:04:29.815634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97fc10 with addr=10.0.0.2, port=4420 00:22:38.503 [2024-11-06 11:04:29.815642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97fc10 is same with the state(6) to be set 00:22:38.503 [2024-11-06 11:04:29.815692] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:38.503 [2024-11-06 11:04:29.815735] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:38.503 [2024-11-06 11:04:29.815783] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:38.503 [2024-11-06 11:04:29.816110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.816125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:38.503 [2024-11-06 11:04:29.816142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.816151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.816161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.816168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.816178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.816185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.816195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.816208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.816217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.503 [2024-11-06 11:04:29.816225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.503 [2024-11-06 11:04:29.816234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816242] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 
11:04:29.816529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816623] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 
[2024-11-06 11:04:29.816825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.504 [2024-11-06 11:04:29.816877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.504 [2024-11-06 11:04:29.816887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.816894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.816904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.816912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.816921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.816929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.816938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.816946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.816955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.816963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.816973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.816980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.816989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.816997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817210] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ff60 is same with the state(6) to be set 00:22:38.505 [2024-11-06 11:04:29.817312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:38.505 [2024-11-06 11:04:29.817346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb6750 (9): Bad file descriptor 00:22:38.505 [2024-11-06 11:04:29.817358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2610 (9): Bad file descriptor 00:22:38.505 [2024-11-06 11:04:29.817367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97fc10 (9): Bad file descriptor 00:22:38.505 [2024-11-06 11:04:29.817412] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:22:38.505 [2024-11-06 11:04:29.817458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.505 [2024-11-06 11:04:29.817642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.505 [2024-11-06 11:04:29.817651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:38.506 [2024-11-06 11:04:29.817764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817856] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.817986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.817994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.818004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.818011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.818020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.818028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.818038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.818045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.818054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.818062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.818071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.818078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.818088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.818095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.818105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.818112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.818124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.818131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.818141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 
11:04:29.818148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.818157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.818165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.818174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.818182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.818191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.818198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.818207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.818215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.818224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.506 [2024-11-06 11:04:29.818232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.506 [2024-11-06 11:04:29.818241] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 
[2024-11-06 11:04:29.818436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.507 [2024-11-06 11:04:29.818555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.507 [2024-11-06 11:04:29.818563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe77fe0 is same with the state(6) to be set 00:22:38.507 [2024-11-06 11:04:29.818620] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:22:38.507 [2024-11-06 11:04:29.819928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:38.507 [2024-11-06 11:04:29.820162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.507 [2024-11-06 11:04:29.820178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbc050 with addr=10.0.0.2, port=4420 00:22:38.507 [2024-11-06 11:04:29.820188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbc050 is same with the state(6) to be set 00:22:38.507 [2024-11-06 11:04:29.820198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:38.507 [2024-11-06 11:04:29.820206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:38.507 [2024-11-06 11:04:29.820218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:22:38.507 [2024-11-06 11:04:29.820228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:22:38.507 [2024-11-06 11:04:29.820237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:22:38.507 [2024-11-06 11:04:29.820245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:22:38.507 [2024-11-06 11:04:29.820253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:22:38.507 [2024-11-06 11:04:29.820261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:22:38.507 [2024-11-06 11:04:29.820268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:22:38.507 [2024-11-06 11:04:29.820274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:22:38.507 [2024-11-06 11:04:29.820281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:22:38.507 [2024-11-06 11:04:29.820288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:22:38.507 [2024-11-06 11:04:29.821526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.507 [2024-11-06 11:04:29.821539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" NOTICE pairs repeated for cid:1 through cid:63 (lba:16512 through lba:24448 in steps of 128), timestamps 11:04:29.821552 through 11:04:29.822648, elided ...]
00:22:38.509 [2024-11-06 11:04:29.822656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe792d0 is same with the state(6) to be set
00:22:38.509 [2024-11-06 11:04:29.823929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.509 [2024-11-06 11:04:29.823943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" NOTICE pairs repeated for cid:1 through cid:47 (lba:24704 through lba:30592 in steps of 128), timestamps 11:04:29.823956 through 11:04:29.824753, elided ...]
00:22:38.510 [2024-11-06 11:04:29.824763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.510 
[2024-11-06 11:04:29.824770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.510 [2024-11-06 11:04:29.824780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.510 [2024-11-06 11:04:29.824787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.510 [2024-11-06 11:04:29.824796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.510 [2024-11-06 11:04:29.824804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.510 [2024-11-06 11:04:29.824813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.510 [2024-11-06 11:04:29.824821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.824832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.824839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.824849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.824856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.824865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.824872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.824881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.824889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.824898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.824905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.824915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.824922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.824932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.824939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.824949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.824956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.824965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.824972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.824982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.824989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.824998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.825006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.825015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.825023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.825032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8f790 is same with the state(6) to be set 00:22:38.511 [2024-11-06 11:04:29.826341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:38.511 [2024-11-06 11:04:29.826369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:38.511 [2024-11-06 11:04:29.826663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826760] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.511 [2024-11-06 11:04:29.826805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.511 [2024-11-06 11:04:29.826812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.826821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.826829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.826838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.826845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.826855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.826863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.826872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.826879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.826889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.826897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.826906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.826914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.826923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.826930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.826940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.826948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.826957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.826964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.826974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.826981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.826990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.826999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 
11:04:29.827050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827145] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 
[2024-11-06 11:04:29.827339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.512 [2024-11-06 11:04:29.827427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.512 [2024-11-06 11:04:29.827437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.827444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.827453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd91390 is same with the state(6) to be set 00:22:38.513 [2024-11-06 11:04:29.828725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.828739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.828754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.828764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.828775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.828785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.828796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.828805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.828816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.828824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.828833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.828841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.828850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.828857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.828867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.828875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.828885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.828893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.828902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.828909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:38.513 [2024-11-06 11:04:29.828919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.828929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.828938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.828946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.828956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.828963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.828973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.828980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.828989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.828996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829013] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 
11:04:29.829300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.513 [2024-11-06 11:04:29.829352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.513 [2024-11-06 11:04:29.829362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829395] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 
[2024-11-06 11:04:29.829588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.514 [2024-11-06 11:04:29.829768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.514 [2024-11-06 11:04:29.829775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.514 [2024-11-06 11:04:29.829786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.514 [2024-11-06 11:04:29.829794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.514 [2024-11-06 11:04:29.829803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.514 [2024-11-06 11:04:29.829811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.514 [2024-11-06 11:04:29.829820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.514 [2024-11-06 11:04:29.829827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.514 [2024-11-06 11:04:29.829836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd928c0 is same with the state(6) to be set
00:22:38.514 [2024-11-06 11:04:29.831550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:38.514 [2024-11-06 11:04:29.831575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:38.514 [2024-11-06 11:04:29.831585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:38.514 [2024-11-06 11:04:29.831596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:38.514 [2024-11-06 11:04:29.831850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:38.514 [2024-11-06 11:04:29.831864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe043c0 with addr=10.0.0.2, port=4420
00:22:38.514 [2024-11-06 11:04:29.831872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe043c0 is same with the state(6) to be set
00:22:38.514 [2024-11-06 11:04:29.831884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbc050 (9): Bad file descriptor
00:22:38.514 [2024-11-06 11:04:29.831930] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:22:38.514 [2024-11-06 11:04:29.831948] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:22:38.514 [2024-11-06 11:04:29.831958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe043c0 (9): Bad file descriptor
00:22:38.514 task offset: 30080 on job bdev=Nvme4n1 fails
00:22:38.514
00:22:38.514 Latency(us)
00:22:38.514 [2024-11-06T10:04:29.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:38.514 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.514 Job: Nvme1n1 ended in about 0.97 seconds with error
00:22:38.514 Verification LBA range: start 0x0 length 0x400
00:22:38.514 Nvme1n1 : 0.97 198.28 12.39 66.09 0.00 239364.59 9175.04 270882.13
00:22:38.514 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.514 Job: Nvme2n1 ended in about 0.97 seconds with error
00:22:38.514 Verification LBA range: start 0x0 length 0x400
00:22:38.514 Nvme2n1 : 0.97 131.86 8.24 65.93 0.00 313607.11 15728.64 251658.24
00:22:38.514 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.514 Job: Nvme3n1 ended in about 0.97 seconds with error
00:22:38.514 Verification LBA range: start 0x0 length 0x400
00:22:38.514 Nvme3n1 : 0.97 197.31 12.33 65.77 0.00 230873.81 18240.85 248162.99
00:22:38.514 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.514 Job: Nvme4n1 ended in about 0.96 seconds with error
00:22:38.514 Verification LBA range: start 0x0 length 0x400
00:22:38.515 Nvme4n1 : 0.96 200.93 12.56 66.98 0.00 221667.41 16820.91 248162.99
00:22:38.515 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.515 Verification LBA range: start 0x0 length 0x400
00:22:38.515 Nvme5n1 : 0.96 200.02 12.50 0.00 0.00 290655.86 19442.35 270882.13
00:22:38.515 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.515 Job: Nvme6n1 ended in about 0.96 seconds with error
00:22:38.515 Verification LBA range: start 0x0 length 0x400
00:22:38.515 Nvme6n1 : 0.96 200.68 12.54 66.89 0.00 212295.68 15400.96 234181.97
00:22:38.515 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.515 Job: Nvme7n1 ended in about 0.96 seconds with error
00:22:38.515 Verification LBA range: start 0x0 length 0x400
00:22:38.515 Nvme7n1 : 0.96 200.43 12.53 66.81 0.00 207760.21 20862.29 228939.09
00:22:38.515 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.515 Job: Nvme8n1 ended in about 0.97 seconds with error
00:22:38.515 Verification LBA range: start 0x0 length 0x400
00:22:38.515 Nvme8n1 : 0.97 198.61 12.41 66.20 0.00 205080.32 17257.81 249910.61
00:22:38.515 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.515 Job: Nvme9n1 ended in about 0.98 seconds with error
00:22:38.515 Verification LBA range: start 0x0 length 0x400
00:22:38.515 Nvme9n1 : 0.98 131.22 8.20 65.61 0.00 270028.80 19114.67 242920.11
00:22:38.515 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.515 Job: Nvme10n1 ended in about 0.98 seconds with error
00:22:38.515 Verification LBA range: start 0x0 length 0x400
00:22:38.515 Nvme10n1 : 0.98 130.90 8.18 65.45 0.00 264493.80 21845.33 272629.76
00:22:38.515 [2024-11-06T10:04:29.937Z] ===================================================================================================================
00:22:38.515 [2024-11-06T10:04:29.937Z] Total : 1790.23 111.89 595.73 0.00 241236.80 9175.04 272629.76
00:22:38.515 [2024-11-06 11:04:29.858990] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:38.515 [2024-11-06 11:04:29.859039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:38.515 [2024-11-06 11:04:29.859323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:38.515 [2024-11-06 11:04:29.859343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x98acb0 with addr=10.0.0.2, port=4420
00:22:38.515 [2024-11-06 11:04:29.859354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98acb0 is same with the state(6) to be set
00:22:38.515 [2024-11-06 11:04:29.859663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:38.515 [2024-11-06 11:04:29.859673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9819f0 with addr=10.0.0.2, port=4420
00:22:38.515 [2024-11-06 11:04:29.859680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9819f0 is same with the state(6) to be set
00:22:38.515 [2024-11-06 11:04:29.859760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:38.515 [2024-11-06 11:04:29.859770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x988fc0 with addr=10.0.0.2, port=4420
00:22:38.515 [2024-11-06 11:04:29.859777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988fc0 is same with the state(6) to be set
00:22:38.515 [2024-11-06 11:04:29.859951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:38.515 [2024-11-06 11:04:29.859961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf9c70 with addr=10.0.0.2, port=4420
00:22:38.515 [2024-11-06 11:04:29.859968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf9c70 is same with the state(6) to be set
00:22:38.515 [2024-11-06 11:04:29.859979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:38.515 [2024-11-06 11:04:29.859986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:38.515 [2024-11-06 11:04:29.860002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:38.515 [2024-11-06 11:04:29.860011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:22:38.515 [2024-11-06 11:04:29.861376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:38.515 [2024-11-06 11:04:29.861391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:38.515 [2024-11-06 11:04:29.861400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:38.515 [2024-11-06 11:04:29.861621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:38.515 [2024-11-06 11:04:29.861635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdfda00 with addr=10.0.0.2, port=4420
00:22:38.515 [2024-11-06 11:04:29.861643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfda00 is same with the state(6) to be set
00:22:38.515 [2024-11-06 11:04:29.861655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98acb0 (9): Bad file descriptor
00:22:38.515 [2024-11-06 11:04:29.861668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9819f0 (9): Bad file descriptor
00:22:38.515 [2024-11-06 11:04:29.861677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x988fc0 (9): Bad file descriptor
00:22:38.515 [2024-11-06 11:04:29.861686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf9c70 (9): Bad file descriptor
00:22:38.515 [2024-11-06 11:04:29.861694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:22:38.515 [2024-11-06 11:04:29.861701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:22:38.515 [2024-11-06 11:04:29.861708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:22:38.515 [2024-11-06 11:04:29.861715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:22:38.515 [2024-11-06 11:04:29.861761] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:22:38.515 [2024-11-06 11:04:29.861777] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:22:38.515 [2024-11-06 11:04:29.861788] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:22:38.515 [2024-11-06 11:04:29.861800] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:22:38.515 [2024-11-06 11:04:29.862261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:38.515 [2024-11-06 11:04:29.862278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97fc10 with addr=10.0.0.2, port=4420
00:22:38.515 [2024-11-06 11:04:29.862286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97fc10 is same with the state(6) to be set
00:22:38.515 [2024-11-06 11:04:29.862486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:38.515 [2024-11-06 11:04:29.862496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2610 with addr=10.0.0.2, port=4420
00:22:38.515 [2024-11-06 11:04:29.862504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2610 is same with the state(6) to be set
00:22:38.515 [2024-11-06 11:04:29.862657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:38.515 [2024-11-06 11:04:29.862667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb6750 with addr=10.0.0.2, port=4420
00:22:38.515 [2024-11-06 11:04:29.862678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6750 is same with the state(6) to be set
00:22:38.515 [2024-11-06 11:04:29.862687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdfda00 (9): Bad file descriptor
00:22:38.515 [2024-11-06 11:04:29.862696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:38.515 [2024-11-06 11:04:29.862703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:38.515 [2024-11-06 11:04:29.862711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:38.515 [2024-11-06 11:04:29.862718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:38.516 [2024-11-06 11:04:29.862725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:38.516 [2024-11-06 11:04:29.862731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:38.516 [2024-11-06 11:04:29.862738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:38.516 [2024-11-06 11:04:29.862745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:22:38.516 [2024-11-06 11:04:29.862757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:22:38.516 [2024-11-06 11:04:29.862764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:22:38.516 [2024-11-06 11:04:29.862771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:22:38.516 [2024-11-06 11:04:29.862778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:22:38.516 [2024-11-06 11:04:29.862786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:22:38.516 [2024-11-06 11:04:29.862792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:22:38.516 [2024-11-06 11:04:29.862799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:22:38.516 [2024-11-06 11:04:29.862805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:22:38.516 [2024-11-06 11:04:29.862874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:38.516 [2024-11-06 11:04:29.862885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:38.516 [2024-11-06 11:04:29.862907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97fc10 (9): Bad file descriptor 00:22:38.516 [2024-11-06 11:04:29.862917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2610 (9): Bad file descriptor 00:22:38.516 [2024-11-06 11:04:29.862926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb6750 (9): Bad file descriptor 00:22:38.516 [2024-11-06 11:04:29.862935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:38.516 [2024-11-06 11:04:29.862941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:38.516 [2024-11-06 11:04:29.862948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:38.516 [2024-11-06 11:04:29.862955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:22:38.516 [2024-11-06 11:04:29.863291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.516 [2024-11-06 11:04:29.863303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbc050 with addr=10.0.0.2, port=4420 00:22:38.516 [2024-11-06 11:04:29.863314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbc050 is same with the state(6) to be set 00:22:38.516 [2024-11-06 11:04:29.863387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.516 [2024-11-06 11:04:29.863397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe043c0 with addr=10.0.0.2, port=4420 00:22:38.516 [2024-11-06 11:04:29.863404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe043c0 is same with the state(6) to be set 00:22:38.516 [2024-11-06 11:04:29.863412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:38.516 [2024-11-06 11:04:29.863418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:38.516 [2024-11-06 11:04:29.863425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:38.516 [2024-11-06 11:04:29.863431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:38.516 [2024-11-06 11:04:29.863439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:38.516 [2024-11-06 11:04:29.863445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:38.516 [2024-11-06 11:04:29.863452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:22:38.516 [2024-11-06 11:04:29.863458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:38.516 [2024-11-06 11:04:29.864146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:38.516 [2024-11-06 11:04:29.864157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:38.516 [2024-11-06 11:04:29.864166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:38.516 [2024-11-06 11:04:29.864175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:38.516 [2024-11-06 11:04:29.864207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbc050 (9): Bad file descriptor 00:22:38.516 [2024-11-06 11:04:29.864219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe043c0 (9): Bad file descriptor 00:22:38.516 [2024-11-06 11:04:29.864251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:38.516 [2024-11-06 11:04:29.864260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:38.516 [2024-11-06 11:04:29.864268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:38.516 [2024-11-06 11:04:29.864275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:22:38.516 [2024-11-06 11:04:29.864284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:38.516 [2024-11-06 11:04:29.864292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:38.516 [2024-11-06 11:04:29.864300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:38.516 [2024-11-06 11:04:29.864308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:38.776 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3323130 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3323130 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3323130 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:39.719 11:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:39.719 rmmod nvme_tcp 00:22:39.719 rmmod nvme_fabrics 00:22:39.719 rmmod nvme_keyring 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3322886 ']' 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3322886 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3322886 ']' 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3322886 00:22:39.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3322886) - No such process 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3322886 is not found' 00:22:39.719 Process with pid 3322886 is not found 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:39.719 
11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.719 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:42.264 00:22:42.264 real 0m7.771s 00:22:42.264 user 0m18.998s 00:22:42.264 sys 0m1.234s 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.264 ************************************ 00:22:42.264 END TEST nvmf_shutdown_tc3 00:22:42.264 ************************************ 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:42.264 11:04:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:42.264 ************************************ 00:22:42.264 START TEST nvmf_shutdown_tc4 00:22:42.264 ************************************ 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.264 11:04:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 
-- # local -ga e810 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:42.264 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.264 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.264 
11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:42.265 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:42.265 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:42.265 Found net devices under 0000:4b:00.1: cvl_0_1 
00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:42.265 11:04:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT' 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:42.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:22:42.265 00:22:42.265 --- 10.0.0.2 ping statistics --- 00:22:42.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.265 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:42.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:22:42.265 00:22:42.265 --- 10.0.0.1 ping statistics --- 00:22:42.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.265 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3324482 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3324482 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 3324482 ']' 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:42.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:42.265 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:42.526 [2024-11-06 11:04:33.701023] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:22:42.526 [2024-11-06 11:04:33.701074] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.526 [2024-11-06 11:04:33.793145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:42.526 [2024-11-06 11:04:33.825629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.526 [2024-11-06 11:04:33.825663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.526 [2024-11-06 11:04:33.825668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.526 [2024-11-06 11:04:33.825674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.526 [2024-11-06 11:04:33.825678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
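The nvmf/common.sh trace above wires up the test network: it moves one port (cvl_0_0) into a fresh namespace with 10.0.0.2/24, keeps the peer port (cvl_0_1) on the host with 10.0.0.1/24, inserts an iptables ACCEPT for the default NVMe/TCP port 4420, and verifies connectivity with one ping in each direction before launching nvmf_tgt inside the namespace. The sequence can be sketched as a dry-run script; interface names, the namespace name, and addresses are taken from the log, and `run` defaults to `echo` because the real commands need root and the actual NICs:

```shell
#!/bin/sh
# Dry-run sketch of the namespace wiring performed by nvmf/common.sh above.
# Swap the body of run() for "$@" (under root) to execute for real.
NS=cvl_0_0_ns_spdk
run() { echo "$@"; }

setup_ns() {
    run ip netns add "$NS"
    run ip link set cvl_0_0 netns "$NS"           # target-side port into the namespace
    run ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side stays on the host
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    run ip link set cvl_0_1 up
    run ip netns exec "$NS" ip link set cvl_0_0 up
    run ip netns exec "$NS" ip link set lo up
    # allow inbound NVMe/TCP traffic on the default port
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # one-packet connectivity check in both directions
    run ping -c 1 10.0.0.2
    run ip netns exec "$NS" ping -c 1 10.0.0.1
}

setup_ns
```

Once this returns 0, the target application is prefixed with `ip netns exec "$NVMF_TARGET_NAMESPACE"` so it listens on 10.0.0.2 inside the namespace, as seen in the nvmf_tgt launch line above.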
00:22:42.526 [2024-11-06 11:04:33.827183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.526 [2024-11-06 11:04:33.827340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:42.526 [2024-11-06 11:04:33.827494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.526 [2024-11-06 11:04:33.827496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:43.098 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:43.098 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:22:43.098 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:43.098 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.098 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:43.359 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.359 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:43.359 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.359 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:43.359 [2024-11-06 11:04:34.541158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.359 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.359 11:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:43.359 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:43.359 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:43.359 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:43.359 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:43.359 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.359 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:43.359 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.359 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.360 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:43.360 Malloc1 00:22:43.360 [2024-11-06 11:04:34.656407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.360 Malloc2 00:22:43.360 Malloc3 00:22:43.360 Malloc4 00:22:43.621 Malloc5 00:22:43.621 Malloc6 00:22:43.621 Malloc7 00:22:43.621 Malloc8 00:22:43.621 Malloc9 
00:22:43.621 Malloc10 00:22:43.621 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.621 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:43.621 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.621 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:43.881 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3324864 00:22:43.881 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:43.881 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:43.881 [2024-11-06 11:04:35.124380] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
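The shutdown_tc4 sequence that follows starts spdk_nvme_perf against the target and then tears the target down mid-I/O via killprocess, whose trace (kill -0 check, `ps --no-headers -o comm=` lookup, refusing to kill `sudo`, then kill and wait) appears below. A minimal helper in that spirit can be sketched as follows; the function name and details are illustrative, not SPDK's exact implementation, and `wait` only reaps children of the calling shell:

```shell
#!/bin/sh
# Hedged sketch of a kill-and-wait helper similar in spirit to
# autotest_common.sh's killprocess() as traced in this log.
kill_and_wait() {
    pid=$1
    [ -n "$pid" ] || return 1                 # no pid given
    kill -0 "$pid" 2>/dev/null || return 0    # process already gone
    name=$(ps --no-headers -o comm= "$pid")   # look up the command name
    [ "$name" != "sudo" ] || return 1         # never signal the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    # reap it; wait returns the (non-zero) status of a killed child
    wait "$pid" 2>/dev/null || true
}
```

Usage would look like `app & kill_and_wait $!`; after it returns, `kill -0` on the pid fails because the process has been reaped.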
00:22:49.174 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:49.174 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3324482 00:22:49.174 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3324482 ']' 00:22:49.174 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3324482 00:22:49.174 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:22:49.174 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:49.174 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3324482 00:22:49.174 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:49.174 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:49.174 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3324482' 00:22:49.174 killing process with pid 3324482 00:22:49.174 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 3324482 00:22:49.174 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 3324482 00:22:49.174 [2024-11-06 11:04:40.129478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x687c20 is same with the state(6) to be set 00:22:49.174 [2024-11-06 
11:04:40.129520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x687c20 is same with the state(6) to be set 00:22:49.174 [2024-11-06 11:04:40.129526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x687c20 is same with the state(6) to be set 00:22:49.174 [2024-11-06 11:04:40.129532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x687c20 is same with the state(6) to be set 00:22:49.174 [2024-11-06 11:04:40.130107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6885c0 is same with the state(6) to be set 00:22:49.174 [2024-11-06 11:04:40.130132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6885c0 is same with the state(6) to be set 00:22:49.174 [2024-11-06 11:04:40.130139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6885c0 is same with the state(6) to be set 00:22:49.174 [2024-11-06 11:04:40.130144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6885c0 is same with the state(6) to be set 00:22:49.174 [2024-11-06 11:04:40.130150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6885c0 is same with the state(6) to be set 00:22:49.174 [2024-11-06 11:04:40.130155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6885c0 is same with the state(6) to be set 00:22:49.174 [2024-11-06 11:04:40.130160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6885c0 is same with the state(6) to be set 00:22:49.174 [2024-11-06 11:04:40.130165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6885c0 is same with the state(6) to be set 00:22:49.174 [2024-11-06 11:04:40.130170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6885c0 is same with the state(6) to be set 00:22:49.174 [2024-11-06 11:04:40.130175] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6885c0 is same with the state(6) to be set 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 starting I/O failed: -6 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 starting I/O failed: -6 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 starting I/O failed: -6 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 starting I/O failed: -6 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 starting I/O failed: -6 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 starting I/O failed: -6 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 starting I/O failed: -6 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.174 Write completed with error (sct=0, sc=8) 
00:22:49.174 starting I/O failed: -6 00:22:49.174 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 [2024-11-06 11:04:40.134603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting 
I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error 
(sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 [2024-11-06 11:04:40.135532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 [2024-11-06 11:04:40.135913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b57f0 is same with the state(6) to be set 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 [2024-11-06 11:04:40.135934] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b57f0 is same with the state(6) to be set 00:22:49.175 [2024-11-06 11:04:40.135940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b57f0 is same with the state(6) to be set 00:22:49.175 [2024-11-06 11:04:40.135945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b57f0 is same with the state(6) to be set 00:22:49.175 [2024-11-06 11:04:40.135949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b57f0 is same with the state(6) to be set 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 [2024-11-06 11:04:40.135955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b57f0 is same with the state(6) to be set 00:22:49.175 starting I/O failed: -6 00:22:49.175 [2024-11-06 11:04:40.135960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b57f0 is same with the state(6) to be set 00:22:49.175 [2024-11-06 11:04:40.135965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b57f0 is same with the state(6) to be set 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 [2024-11-06 11:04:40.135970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b57f0 is same with the state(6) to be set 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O 
failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 [2024-11-06 11:04:40.136172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5cc0 is same with the state(6) to be set 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 [2024-11-06 11:04:40.136196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5cc0 is same with the state(6) to be set 00:22:49.175 [2024-11-06 11:04:40.136202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5cc0 is same with the state(6) to be set 00:22:49.175 [2024-11-06 11:04:40.136207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5cc0 is same with the state(6) to be set 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 [2024-11-06 11:04:40.136212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5cc0 is same with the state(6) to be set 00:22:49.175 [2024-11-06 11:04:40.136217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5cc0 is same with the state(6) to be set 00:22:49.175 starting I/O failed: -6 00:22:49.175 [2024-11-06 11:04:40.136222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5cc0 is same with the state(6) to be set 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 [2024-11-06 11:04:40.136228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5cc0 is same with
the state(6) to be set 00:22:49.175 starting I/O failed: -6 00:22:49.175 [2024-11-06 11:04:40.136237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5cc0 is same with the state(6) to be set 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 [2024-11-06 11:04:40.136242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5cc0 is same with the state(6) to be set 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 starting I/O failed: -6 00:22:49.175 Write completed with error (sct=0, sc=8) 00:22:49.175 [2024-11-06 11:04:40.136427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b61b0 is same with the state(6) to be set 00:22:49.175 starting I/O failed: -6 00:22:49.176 [2024-11-06 11:04:40.136445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b61b0 is same with the state(6) to be set 00:22:49.176 [2024-11-06 11:04:40.136451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b61b0 is same with the state(6) to be set 00:22:49.176 [2024-11-06 11:04:40.136456] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b61b0 is same with the state(6) to be set 00:22:49.176 [2024-11-06 11:04:40.136461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b61b0 is same with the state(6) to be set 00:22:49.176 [2024-11-06 11:04:40.136457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:49.176 [2024-11-06 11:04:40.136466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b61b0 is same with the state(6) to be set 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 [2024-11-06 11:04:40.136663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5320 is same with the state(6) to be set 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 [2024-11-06 11:04:40.136685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5320 is same with the state(6) to be set 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write
completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 
Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 
00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 [2024-11-06 11:04:40.137712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:49.176 NVMe io qpair process completion error 00:22:49.176 [2024-11-06 11:04:40.138685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68a2a0 is same with the state(6) to be set 00:22:49.176 [2024-11-06 11:04:40.138700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68a2a0 is same with the state(6) to be set 00:22:49.176 [2024-11-06 11:04:40.138705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68a2a0 is same with the state(6) to be set 00:22:49.176 [2024-11-06 11:04:40.138710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68a2a0 is same with the state(6) to be set 00:22:49.176 [2024-11-06 11:04:40.138715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68a2a0 is same with the state(6) to be set 00:22:49.176 [2024-11-06 11:04:40.138720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68a2a0 is same with the state(6) to be set 00:22:49.176 [2024-11-06 11:04:40.138725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68a2a0 is same with the state(6) to be set 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting 
I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 [2024-11-06 11:04:40.139575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 
00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 Write completed with error (sct=0, sc=8) 00:22:49.176 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 
00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 [2024-11-06 11:04:40.140359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 
00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 
00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 [2024-11-06 11:04:40.141297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:49.177 NVMe io qpair process completion error 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 
00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 starting I/O failed: -6 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 Write completed with error (sct=0, sc=8) 00:22:49.177 [2024-11-06 11:04:40.142598] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:49.177 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting 
I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 [2024-11-06 11:04:40.143412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write 
completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 
00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 [2024-11-06 11:04:40.144365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, 
sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error 
(sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.178 Write completed with error (sct=0, sc=8) 00:22:49.178 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with 
error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 [2024-11-06 11:04:40.145864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:49.179 NVMe io qpair process completion error 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 starting I/O failed: -6 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 Write completed with error (sct=0, sc=8) 00:22:49.179 Write completed with error (sct=0, sc=8) 
00:22:49.179 Write completed with error (sct=0, sc=8)
00:22:49.179 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted]
00:22:49.179 [2024-11-06 11:04:40.147503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated write-error lines omitted]
00:22:49.179 [2024-11-06 11:04:40.148404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated write-error lines omitted]
00:22:49.180 [2024-11-06 11:04:40.149325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated write-error lines omitted]
00:22:49.180 [2024-11-06 11:04:40.151554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:49.180 NVMe io qpair process completion error
[repeated write-error lines omitted]
00:22:49.181 [2024-11-06 11:04:40.152638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated write-error lines omitted]
00:22:49.181 [2024-11-06 11:04:40.153474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated write-error lines omitted]
00:22:49.181 [2024-11-06 11:04:40.154437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated write-error lines omitted]
00:22:49.182 [2024-11-06 11:04:40.156114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:49.182 NVMe io qpair process completion error
[repeated write-error lines omitted]
00:22:49.182 [2024-11-06 11:04:40.157214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated write-error lines omitted]
00:22:49.182 [2024-11-06 11:04:40.158192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated write-error lines omitted]
00:22:49.183 [2024-11-06 11:04:40.159146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated write-error lines continue]
failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting 
I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 
starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 [2024-11-06 11:04:40.161227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:49.183 NVMe io qpair process completion error 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 
00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 starting I/O failed: -6 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.183 Write completed with error (sct=0, sc=8) 00:22:49.184 [2024-11-06 11:04:40.162438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:49.184 starting I/O failed: -6 00:22:49.184 starting I/O failed: -6 00:22:49.184 starting I/O failed: -6 00:22:49.184 starting I/O failed: -6 00:22:49.184 starting I/O failed: -6 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write 
completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 [2024-11-06 11:04:40.163475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting 
I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write 
completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 [2024-11-06 11:04:40.164427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:49.184 starting I/O failed: 
-6 00:22:49.184 starting I/O failed: -6 00:22:49.184 starting I/O failed: -6 00:22:49.184 starting I/O failed: -6 00:22:49.184 starting I/O failed: -6 00:22:49.184 starting I/O failed: -6 00:22:49.184 starting I/O failed: -6 00:22:49.184 starting I/O failed: -6 00:22:49.184 starting I/O failed: -6 00:22:49.184 starting I/O failed: -6 00:22:49.184 starting I/O failed: -6 00:22:49.184 starting I/O failed: -6 00:22:49.184 NVMe io qpair process completion error 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write 
completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 starting I/O failed: -6 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.184 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 [2024-11-06 11:04:40.166167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error 
(sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 [2024-11-06 11:04:40.166989] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 
starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 
Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 [2024-11-06 11:04:40.167937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6 00:22:49.185 Write 
completed with error (sct=0, sc=8) 00:22:49.185 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs repeated, 00:22:49.185 to 00:22:49.186 ...]
00:22:49.186 [2024-11-06 11:04:40.169432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:49.186 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:49.186 [2024-11-06 11:04:40.170416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:49.186 [2024-11-06 11:04:40.171315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries, 00:22:49.186 to 00:22:49.187 ...]
00:22:49.187 [2024-11-06 11:04:40.172263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:49.187 [2024-11-06 11:04:40.175301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:49.187 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:49.187 [2024-11-06 11:04:40.176361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries, 00:22:49.187 to 00:22:49.188 ...]
00:22:49.188 [2024-11-06 11:04:40.177188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:22:49.188 [2024-11-06 11:04:40.178373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries, 00:22:49.188 to 00:22:49.189 ...]
00:22:49.189 [2024-11-06 11:04:40.180083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:49.189 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" entries ...]
00:22:49.189 Initializing NVMe Controllers
00:22:49.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:49.189 Controller IO queue size 128, less than required. 
00:22:49.189 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:49.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:49.189 Controller IO queue size 128, less than required. 00:22:49.189 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:49.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:22:49.189 Controller IO queue size 128, less than required. 00:22:49.189 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:49.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:22:49.189 Controller IO queue size 128, less than required. 00:22:49.189 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:49.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:22:49.189 Controller IO queue size 128, less than required. 00:22:49.189 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:49.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:22:49.189 Controller IO queue size 128, less than required. 00:22:49.189 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:49.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:22:49.189 Controller IO queue size 128, less than required. 00:22:49.189 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:49.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:22:49.190 Controller IO queue size 128, less than required. 
00:22:49.190 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:49.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:22:49.190 Controller IO queue size 128, less than required. 00:22:49.190 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:49.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:22:49.190 Controller IO queue size 128, less than required. 00:22:49.190 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:49.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:22:49.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:49.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:22:49.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:22:49.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:22:49.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:22:49.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:22:49.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:22:49.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:22:49.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:22:49.190 Initialization complete. Launching workers. 
00:22:49.190 ======================================================== 00:22:49.190 Latency(us) 00:22:49.190 Device Information : IOPS MiB/s Average min max 00:22:49.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1905.02 81.86 67209.87 784.65 118839.25 00:22:49.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1871.81 80.43 67756.02 939.55 119162.41 00:22:49.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1863.83 80.09 68351.58 603.14 149645.27 00:22:49.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1896.40 81.49 66902.81 832.63 119564.60 00:22:49.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1891.57 81.28 67100.75 950.84 120795.16 00:22:49.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1855.21 79.72 68445.46 666.94 119963.04 00:22:49.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1884.63 80.98 67398.89 679.51 122783.24 00:22:49.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1878.33 80.71 67750.93 549.63 118817.82 00:22:49.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1902.92 81.77 66789.18 875.73 118487.91 00:22:49.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1870.34 80.37 67970.31 914.09 120042.42 00:22:49.190 ======================================================== 00:22:49.190 Total : 18820.06 808.67 67563.11 549.63 149645.27 00:22:49.190 00:22:49.190 [2024-11-06 11:04:40.188589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11efef0 is same with the state(6) to be set 00:22:49.190 [2024-11-06 11:04:40.188638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f1720 is same with the state(6) to be set 00:22:49.190 [2024-11-06 11:04:40.188668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x11ef560 is same with the state(6) to be set 00:22:49.190 [2024-11-06 11:04:40.188697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f1ae0 is same with the state(6) to be set 00:22:49.190 [2024-11-06 11:04:40.188725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f0410 is same with the state(6) to be set 00:22:49.190 [2024-11-06 11:04:40.188779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11efbc0 is same with the state(6) to be set 00:22:49.190 [2024-11-06 11:04:40.188813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ef890 is same with the state(6) to be set 00:22:49.190 [2024-11-06 11:04:40.188842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f1900 is same with the state(6) to be set 00:22:49.190 [2024-11-06 11:04:40.188871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f0740 is same with the state(6) to be set 00:22:49.190 [2024-11-06 11:04:40.188899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f0a70 is same with the state(6) to be set 00:22:49.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:22:49.190 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3324864 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3324864 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@638 -- # local arg=wait 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3324864 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:50.134 rmmod nvme_tcp 00:22:50.134 rmmod nvme_fabrics 00:22:50.134 rmmod nvme_keyring 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3324482 ']' 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3324482 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3324482 ']' 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3324482 00:22:50.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3324482) - No such process 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3324482 is not found' 00:22:50.134 Process with pid 3324482 is not found 
00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.134 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:52.681 00:22:52.681 real 0m10.264s 00:22:52.681 user 0m27.954s 00:22:52.681 sys 0m4.054s 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:52.681 11:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:52.681 ************************************ 00:22:52.681 END TEST nvmf_shutdown_tc4 00:22:52.681 ************************************ 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:52.681 00:22:52.681 real 0m43.705s 00:22:52.681 user 1m47.725s 00:22:52.681 sys 0m13.684s 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:52.681 ************************************ 00:22:52.681 END TEST nvmf_shutdown 00:22:52.681 ************************************ 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:52.681 ************************************ 00:22:52.681 START TEST nvmf_nsid 00:22:52.681 ************************************ 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:52.681 * Looking for test storage... 
00:22:52.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:52.681 
11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:52.681 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:52.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.682 --rc genhtml_branch_coverage=1 00:22:52.682 --rc genhtml_function_coverage=1 00:22:52.682 --rc genhtml_legend=1 00:22:52.682 --rc geninfo_all_blocks=1 00:22:52.682 --rc 
geninfo_unexecuted_blocks=1 00:22:52.682 00:22:52.682 ' 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:52.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.682 --rc genhtml_branch_coverage=1 00:22:52.682 --rc genhtml_function_coverage=1 00:22:52.682 --rc genhtml_legend=1 00:22:52.682 --rc geninfo_all_blocks=1 00:22:52.682 --rc geninfo_unexecuted_blocks=1 00:22:52.682 00:22:52.682 ' 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:52.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.682 --rc genhtml_branch_coverage=1 00:22:52.682 --rc genhtml_function_coverage=1 00:22:52.682 --rc genhtml_legend=1 00:22:52.682 --rc geninfo_all_blocks=1 00:22:52.682 --rc geninfo_unexecuted_blocks=1 00:22:52.682 00:22:52.682 ' 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:52.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.682 --rc genhtml_branch_coverage=1 00:22:52.682 --rc genhtml_function_coverage=1 00:22:52.682 --rc genhtml_legend=1 00:22:52.682 --rc geninfo_all_blocks=1 00:22:52.682 --rc geninfo_unexecuted_blocks=1 00:22:52.682 00:22:52.682 ' 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.682 11:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:52.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:52.682 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:59.272 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:59.272 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:59.272 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:59.272 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:59.272 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.273 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.273 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.273 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:59.273 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.273 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.273 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:59.273 11:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:59.273 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.273 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.273 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:59.273 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:59.273 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.273 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:59.534 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:22:59.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:22:59.534 00:22:59.534 --- 10.0.0.2 ping statistics --- 00:22:59.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.534 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:59.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:22:59.534 00:22:59.534 --- 10.0.0.1 ping statistics --- 00:22:59.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.534 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:59.534 11:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:59.534 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:59.795 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3330218 00:22:59.795 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3330218 00:22:59.795 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3330218 ']' 00:22:59.795 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.795 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:59.795 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.795 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:59.795 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:59.795 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:59.795 [2024-11-06 11:04:51.010185] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:22:59.795 [2024-11-06 11:04:51.010240] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.795 [2024-11-06 11:04:51.087407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.795 [2024-11-06 11:04:51.123623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.795 [2024-11-06 11:04:51.123657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.795 [2024-11-06 11:04:51.123665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.795 [2024-11-06 11:04:51.123671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.795 [2024-11-06 11:04:51.123677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:59.795 [2024-11-06 11:04:51.124237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3330244 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.738 
11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=13c4fead-ef9c-43e8-921d-5a8b1f794ca2 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=27887bac-7a8f-45d2-9520-4455f19fddcb 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=18a89c2c-c2c4-4360-9029-816f0f521eb2 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:00.738 null0 00:23:00.738 null1 00:23:00.738 null2 00:23:00.738 [2024-11-06 11:04:51.880539] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.738 [2024-11-06 11:04:51.882482] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:23:00.738 [2024-11-06 11:04:51.882529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330244 ] 00:23:00.738 [2024-11-06 11:04:51.904739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3330244 /var/tmp/tgt2.sock 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3330244 ']' 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:00.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:00.738 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:00.738 [2024-11-06 11:04:51.969112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.738 [2024-11-06 11:04:52.004936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.999 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:00.999 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:00.999 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:01.260 [2024-11-06 11:04:52.469649] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.260 [2024-11-06 11:04:52.485783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:01.260 nvme0n1 nvme0n2 00:23:01.260 nvme1n1 00:23:01.260 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:01.260 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:01.260 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:02.646 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:02.646 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:02.647 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:23:02.647 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:02.647 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:23:02.647 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:02.647 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:02.647 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:02.647 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:02.647 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:02.647 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:23:02.647 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:23:02.647 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:23:03.648 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:03.648 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:03.648 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:03.648 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:23:03.648 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:03.648 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 13c4fead-ef9c-43e8-921d-5a8b1f794ca2 00:23:03.648 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:03.648 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:03.648 11:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:03.648 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:03.648 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:03.648 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=13c4feadef9c43e8921d5a8b1f794ca2 00:23:03.648 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 13C4FEADEF9C43E8921D5A8B1F794CA2 00:23:03.648 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 13C4FEADEF9C43E8921D5A8B1F794CA2 == \1\3\C\4\F\E\A\D\E\F\9\C\4\3\E\8\9\2\1\D\5\A\8\B\1\F\7\9\4\C\A\2 ]] 00:23:03.648 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:03.648 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 27887bac-7a8f-45d2-9520-4455f19fddcb 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:03.956 
11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=27887bac7a8f45d295204455f19fddcb 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 27887BAC7A8F45D295204455F19FDDCB 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 27887BAC7A8F45D295204455F19FDDCB == \2\7\8\8\7\B\A\C\7\A\8\F\4\5\D\2\9\5\2\0\4\4\5\5\F\1\9\F\D\D\C\B ]] 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 18a89c2c-c2c4-4360-9029-816f0f521eb2 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:03.956 11:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=18a89c2cc2c443609029816f0f521eb2 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 18A89C2CC2C443609029816F0F521EB2 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 18A89C2CC2C443609029816F0F521EB2 == \1\8\A\8\9\C\2\C\C\2\C\4\4\3\6\0\9\0\2\9\8\1\6\F\0\F\5\2\1\E\B\2 ]] 00:23:03.956 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:04.217 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:04.217 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:04.217 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3330244 00:23:04.217 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3330244 ']' 00:23:04.217 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3330244 00:23:04.217 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:04.217 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:04.217 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3330244 00:23:04.217 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:04.217 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:04.217 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3330244' 00:23:04.217 killing process with pid 3330244 00:23:04.217 11:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3330244 00:23:04.217 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3330244 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:04.479 rmmod nvme_tcp 00:23:04.479 rmmod nvme_fabrics 00:23:04.479 rmmod nvme_keyring 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3330218 ']' 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3330218 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3330218 ']' 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3330218 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:04.479 11:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3330218 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3330218' 00:23:04.479 killing process with pid 3330218 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3330218 00:23:04.479 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3330218 00:23:04.741 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:04.741 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:04.741 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:04.741 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:04.741 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:04.741 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:04.741 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:04.741 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:04.741 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:04.741 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.741 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.741 11:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.654 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:06.654 00:23:06.654 real 0m14.325s 00:23:06.654 user 0m10.934s 00:23:06.654 sys 0m6.440s 00:23:06.654 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:06.654 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:06.654 ************************************ 00:23:06.654 END TEST nvmf_nsid 00:23:06.654 ************************************ 00:23:06.654 11:04:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:06.654 00:23:06.654 real 13m0.369s 00:23:06.654 user 27m22.955s 00:23:06.654 sys 3m50.422s 00:23:06.654 11:04:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:06.654 11:04:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:06.654 ************************************ 00:23:06.654 END TEST nvmf_target_extra 00:23:06.654 ************************************ 00:23:06.916 11:04:58 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:06.917 11:04:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:06.917 11:04:58 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:06.917 11:04:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:06.917 ************************************ 00:23:06.917 START TEST nvmf_host 00:23:06.917 ************************************ 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:06.917 * Looking for test storage... 
00:23:06.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:06.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.917 --rc genhtml_branch_coverage=1 00:23:06.917 --rc genhtml_function_coverage=1 00:23:06.917 --rc genhtml_legend=1 00:23:06.917 --rc geninfo_all_blocks=1 00:23:06.917 --rc geninfo_unexecuted_blocks=1 00:23:06.917 00:23:06.917 ' 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:06.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.917 --rc genhtml_branch_coverage=1 00:23:06.917 --rc genhtml_function_coverage=1 00:23:06.917 --rc genhtml_legend=1 00:23:06.917 --rc 
geninfo_all_blocks=1 00:23:06.917 --rc geninfo_unexecuted_blocks=1 00:23:06.917 00:23:06.917 ' 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:06.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.917 --rc genhtml_branch_coverage=1 00:23:06.917 --rc genhtml_function_coverage=1 00:23:06.917 --rc genhtml_legend=1 00:23:06.917 --rc geninfo_all_blocks=1 00:23:06.917 --rc geninfo_unexecuted_blocks=1 00:23:06.917 00:23:06.917 ' 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:06.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.917 --rc genhtml_branch_coverage=1 00:23:06.917 --rc genhtml_function_coverage=1 00:23:06.917 --rc genhtml_legend=1 00:23:06.917 --rc geninfo_all_blocks=1 00:23:06.917 --rc geninfo_unexecuted_blocks=1 00:23:06.917 00:23:06.917 ' 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.917 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:07.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.179 ************************************ 00:23:07.179 START TEST nvmf_multicontroller 00:23:07.179 ************************************ 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:07.179 * Looking for test storage... 
00:23:07.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:07.179 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:07.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.442 --rc genhtml_branch_coverage=1 00:23:07.442 --rc genhtml_function_coverage=1 
00:23:07.442 --rc genhtml_legend=1 00:23:07.442 --rc geninfo_all_blocks=1 00:23:07.442 --rc geninfo_unexecuted_blocks=1 00:23:07.442 00:23:07.442 ' 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:07.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.442 --rc genhtml_branch_coverage=1 00:23:07.442 --rc genhtml_function_coverage=1 00:23:07.442 --rc genhtml_legend=1 00:23:07.442 --rc geninfo_all_blocks=1 00:23:07.442 --rc geninfo_unexecuted_blocks=1 00:23:07.442 00:23:07.442 ' 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:07.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.442 --rc genhtml_branch_coverage=1 00:23:07.442 --rc genhtml_function_coverage=1 00:23:07.442 --rc genhtml_legend=1 00:23:07.442 --rc geninfo_all_blocks=1 00:23:07.442 --rc geninfo_unexecuted_blocks=1 00:23:07.442 00:23:07.442 ' 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:07.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.442 --rc genhtml_branch_coverage=1 00:23:07.442 --rc genhtml_function_coverage=1 00:23:07.442 --rc genhtml_legend=1 00:23:07.442 --rc geninfo_all_blocks=1 00:23:07.442 --rc geninfo_unexecuted_blocks=1 00:23:07.442 00:23:07.442 ' 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.442 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.443 11:04:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:07.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:07.443 11:04:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:15.591 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:15.591 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.591 11:05:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:15.591 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.591 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:15.592 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:15.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:23:15.592 00:23:15.592 --- 10.0.0.2 ping statistics --- 00:23:15.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.592 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:23:15.592 00:23:15.592 --- 10.0.0.1 ping statistics --- 00:23:15.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.592 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3335352 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3335352 00:23:15.592 11:05:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3335352 ']' 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:15.592 11:05:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.592 [2024-11-06 11:05:05.976942] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:23:15.592 [2024-11-06 11:05:05.976997] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.592 [2024-11-06 11:05:06.075081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:15.592 [2024-11-06 11:05:06.127614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.592 [2024-11-06 11:05:06.127666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:15.592 [2024-11-06 11:05:06.127675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.592 [2024-11-06 11:05:06.127687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.592 [2024-11-06 11:05:06.127693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.592 [2024-11-06 11:05:06.129507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.592 [2024-11-06 11:05:06.129675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.592 [2024-11-06 11:05:06.129676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.592 [2024-11-06 11:05:06.832716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.592 Malloc0 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.592 [2024-11-06 
11:05:06.903136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.592 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.593 [2024-11-06 11:05:06.915088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.593 Malloc1 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3335701 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3335701 /var/tmp/bdevperf.sock 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3335701 ']' 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:15.593 11:05:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.535 11:05:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:16.535 11:05:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:23:16.535 11:05:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:16.535 11:05:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.535 11:05:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.797 NVMe0n1 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.797 1 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:16.797 11:05:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.797 request: 00:23:16.797 { 00:23:16.797 "name": "NVMe0", 00:23:16.797 "trtype": "tcp", 00:23:16.797 "traddr": "10.0.0.2", 00:23:16.797 "adrfam": "ipv4", 00:23:16.797 "trsvcid": "4420", 00:23:16.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.797 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:16.797 "hostaddr": "10.0.0.1", 00:23:16.797 "prchk_reftag": false, 00:23:16.797 "prchk_guard": false, 00:23:16.797 "hdgst": false, 00:23:16.797 "ddgst": false, 00:23:16.797 "allow_unrecognized_csi": false, 00:23:16.797 "method": "bdev_nvme_attach_controller", 00:23:16.797 "req_id": 1 00:23:16.797 } 00:23:16.797 Got JSON-RPC error response 00:23:16.797 response: 00:23:16.797 { 00:23:16.797 "code": -114, 00:23:16.797 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:16.797 } 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:16.797 11:05:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.797 request: 00:23:16.797 { 00:23:16.797 "name": "NVMe0", 00:23:16.797 "trtype": "tcp", 00:23:16.797 "traddr": "10.0.0.2", 00:23:16.797 "adrfam": "ipv4", 00:23:16.797 "trsvcid": "4420", 00:23:16.797 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:16.797 "hostaddr": "10.0.0.1", 00:23:16.797 "prchk_reftag": false, 00:23:16.797 "prchk_guard": false, 00:23:16.797 "hdgst": false, 00:23:16.797 "ddgst": false, 00:23:16.797 "allow_unrecognized_csi": false, 00:23:16.797 "method": "bdev_nvme_attach_controller", 00:23:16.797 "req_id": 1 00:23:16.797 } 00:23:16.797 Got JSON-RPC error response 00:23:16.797 response: 00:23:16.797 { 00:23:16.797 "code": -114, 00:23:16.797 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:16.797 } 00:23:16.797 11:05:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.797 request: 00:23:16.797 { 00:23:16.797 "name": "NVMe0", 00:23:16.797 "trtype": "tcp", 00:23:16.797 "traddr": "10.0.0.2", 00:23:16.797 "adrfam": "ipv4", 00:23:16.797 "trsvcid": "4420", 00:23:16.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.797 "hostaddr": "10.0.0.1", 00:23:16.797 "prchk_reftag": false, 00:23:16.797 "prchk_guard": false, 00:23:16.797 "hdgst": false, 00:23:16.797 "ddgst": false, 00:23:16.797 "multipath": "disable", 00:23:16.797 "allow_unrecognized_csi": false, 00:23:16.797 "method": "bdev_nvme_attach_controller", 00:23:16.797 "req_id": 1 00:23:16.797 } 00:23:16.797 Got JSON-RPC error response 00:23:16.797 response: 00:23:16.797 { 00:23:16.797 "code": -114, 00:23:16.797 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:16.797 } 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:16.797 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:16.798 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:16.798 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:16.798 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:16.798 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.798 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:16.798 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.798 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:16.798 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.798 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.798 request: 00:23:16.798 { 00:23:16.798 "name": "NVMe0", 00:23:16.798 "trtype": "tcp", 00:23:16.798 "traddr": "10.0.0.2", 00:23:16.798 "adrfam": "ipv4", 00:23:16.798 "trsvcid": "4420", 00:23:16.798 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.798 "hostaddr": "10.0.0.1", 00:23:16.798 "prchk_reftag": false, 00:23:16.798 "prchk_guard": false, 00:23:16.798 "hdgst": false, 00:23:16.798 "ddgst": false, 00:23:16.798 "multipath": "failover", 00:23:16.798 "allow_unrecognized_csi": false, 00:23:16.798 "method": "bdev_nvme_attach_controller", 00:23:16.798 "req_id": 1 00:23:16.798 } 00:23:16.798 Got JSON-RPC error response 00:23:16.798 response: 00:23:16.798 { 00:23:16.798 "code": -114, 00:23:16.798 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:16.798 } 00:23:16.798 11:05:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:16.798 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:16.798 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:16.798 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:16.798 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:16.798 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:16.798 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.798 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.059 NVMe0n1 00:23:17.059 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.059 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:17.059 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.059 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.059 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.059 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:17.059 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.059 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.059 00:23:17.059 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.059 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:17.059 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:17.059 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.059 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.059 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.059 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:17.059 11:05:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:18.442 { 00:23:18.442 "results": [ 00:23:18.442 { 00:23:18.442 "job": "NVMe0n1", 00:23:18.442 "core_mask": "0x1", 00:23:18.442 "workload": "write", 00:23:18.442 "status": "finished", 00:23:18.442 "queue_depth": 128, 00:23:18.442 "io_size": 4096, 00:23:18.442 "runtime": 1.006674, 00:23:18.442 "iops": 24736.905890089543, 00:23:18.442 "mibps": 96.62853863316228, 00:23:18.442 "io_failed": 0, 00:23:18.442 "io_timeout": 0, 00:23:18.442 "avg_latency_us": 5162.645146574572, 00:23:18.442 "min_latency_us": 2102.6133333333332, 00:23:18.442 "max_latency_us": 10922.666666666666 00:23:18.442 } 00:23:18.442 ], 00:23:18.442 "core_count": 1 00:23:18.442 } 00:23:18.442 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:18.442 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.442 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.442 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.442 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:18.442 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3335701 00:23:18.442 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 3335701 ']' 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3335701 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3335701 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3335701' 00:23:18.443 killing process with pid 3335701 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3335701 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3335701 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:23:18.443 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:18.443 [2024-11-06 11:05:07.035409] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:23:18.443 [2024-11-06 11:05:07.035466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3335701 ] 00:23:18.443 [2024-11-06 11:05:07.106383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.443 [2024-11-06 11:05:07.142886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.443 [2024-11-06 11:05:08.373377] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 45a282fd-98a2-4b2f-87f5-c48ee6a484bd already exists 00:23:18.443 [2024-11-06 11:05:08.373408] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:45a282fd-98a2-4b2f-87f5-c48ee6a484bd alias for bdev NVMe1n1 00:23:18.443 [2024-11-06 11:05:08.373416] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:18.443 Running I/O for 1 seconds... 00:23:18.443 24702.00 IOPS, 96.49 MiB/s 00:23:18.443 Latency(us) 00:23:18.443 [2024-11-06T10:05:09.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.443 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:18.443 NVMe0n1 : 1.01 24736.91 96.63 0.00 0.00 5162.65 2102.61 10922.67 00:23:18.443 [2024-11-06T10:05:09.865Z] =================================================================================================================== 00:23:18.443 [2024-11-06T10:05:09.865Z] Total : 24736.91 96.63 0.00 0.00 5162.65 2102.61 10922.67 00:23:18.443 Received shutdown signal, test time was about 1.000000 seconds 00:23:18.443 00:23:18.443 Latency(us) 00:23:18.443 [2024-11-06T10:05:09.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.443 [2024-11-06T10:05:09.865Z] =================================================================================================================== 00:23:18.443 [2024-11-06T10:05:09.865Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:23:18.443 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:18.443 rmmod nvme_tcp 00:23:18.443 rmmod nvme_fabrics 00:23:18.443 rmmod nvme_keyring 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3335352 ']' 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3335352 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 3335352 ']' 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3335352 
00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:18.443 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3335352 00:23:18.703 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:18.703 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:18.703 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3335352' 00:23:18.703 killing process with pid 3335352 00:23:18.703 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3335352 00:23:18.703 11:05:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3335352 00:23:18.703 11:05:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:18.703 11:05:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:18.703 11:05:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:18.703 11:05:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:18.703 11:05:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:18.703 11:05:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:18.703 11:05:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:18.703 11:05:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:18.703 11:05:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:18.703 11:05:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.703 11:05:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.703 11:05:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:21.248 00:23:21.248 real 0m13.701s 00:23:21.248 user 0m16.795s 00:23:21.248 sys 0m6.360s 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.248 ************************************ 00:23:21.248 END TEST nvmf_multicontroller 00:23:21.248 ************************************ 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.248 ************************************ 00:23:21.248 START TEST nvmf_aer 00:23:21.248 ************************************ 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:21.248 * Looking for test storage... 
00:23:21.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:21.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.248 --rc genhtml_branch_coverage=1 00:23:21.248 --rc genhtml_function_coverage=1 00:23:21.248 --rc genhtml_legend=1 00:23:21.248 --rc geninfo_all_blocks=1 00:23:21.248 --rc geninfo_unexecuted_blocks=1 00:23:21.248 00:23:21.248 ' 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:21.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.248 --rc 
genhtml_branch_coverage=1 00:23:21.248 --rc genhtml_function_coverage=1 00:23:21.248 --rc genhtml_legend=1 00:23:21.248 --rc geninfo_all_blocks=1 00:23:21.248 --rc geninfo_unexecuted_blocks=1 00:23:21.248 00:23:21.248 ' 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:21.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.248 --rc genhtml_branch_coverage=1 00:23:21.248 --rc genhtml_function_coverage=1 00:23:21.248 --rc genhtml_legend=1 00:23:21.248 --rc geninfo_all_blocks=1 00:23:21.248 --rc geninfo_unexecuted_blocks=1 00:23:21.248 00:23:21.248 ' 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:21.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.248 --rc genhtml_branch_coverage=1 00:23:21.248 --rc genhtml_function_coverage=1 00:23:21.248 --rc genhtml_legend=1 00:23:21.248 --rc geninfo_all_blocks=1 00:23:21.248 --rc geninfo_unexecuted_blocks=1 00:23:21.248 00:23:21.248 ' 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.248 11:05:12 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.248 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:21.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:21.249 11:05:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.389 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:29.390 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:29.390 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.390 11:05:19 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:29.390 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:29.390 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:29.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:29.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:23:29.390 00:23:29.390 --- 10.0.0.2 ping statistics --- 00:23:29.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.390 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:29.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:29.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:23:29.390 00:23:29.390 --- 10.0.0.1 ping statistics --- 00:23:29.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.390 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:23:29.390 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3340387 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3340387 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 3340387 ']' 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.391 [2024-11-06 11:05:19.782903] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:23:29.391 [2024-11-06 11:05:19.782968] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.391 [2024-11-06 11:05:19.862539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:29.391 [2024-11-06 11:05:19.898610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:29.391 [2024-11-06 11:05:19.898644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.391 [2024-11-06 11:05:19.898652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.391 [2024-11-06 11:05:19.898659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.391 [2024-11-06 11:05:19.898667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.391 [2024-11-06 11:05:19.900186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.391 [2024-11-06 11:05:19.900303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.391 [2024-11-06 11:05:19.900463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.391 [2024-11-06 11:05:19.900463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:29.391 11:05:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.391 [2024-11-06 11:05:20.028726] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.391 Malloc0 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.391 [2024-11-06 11:05:20.097018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.391 [ 00:23:29.391 { 00:23:29.391 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:29.391 "subtype": "Discovery", 00:23:29.391 "listen_addresses": [], 00:23:29.391 "allow_any_host": true, 00:23:29.391 "hosts": [] 00:23:29.391 }, 00:23:29.391 { 00:23:29.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.391 "subtype": "NVMe", 00:23:29.391 "listen_addresses": [ 00:23:29.391 { 00:23:29.391 "trtype": "TCP", 00:23:29.391 "adrfam": "IPv4", 00:23:29.391 "traddr": "10.0.0.2", 00:23:29.391 "trsvcid": "4420" 00:23:29.391 } 00:23:29.391 ], 00:23:29.391 "allow_any_host": true, 00:23:29.391 "hosts": [], 00:23:29.391 "serial_number": "SPDK00000000000001", 00:23:29.391 "model_number": "SPDK bdev Controller", 00:23:29.391 "max_namespaces": 2, 00:23:29.391 "min_cntlid": 1, 00:23:29.391 "max_cntlid": 65519, 00:23:29.391 "namespaces": [ 00:23:29.391 { 00:23:29.391 "nsid": 1, 00:23:29.391 "bdev_name": "Malloc0", 00:23:29.391 "name": "Malloc0", 00:23:29.391 "nguid": "A8244287B8904E99A65C25F980D25E7E", 00:23:29.391 "uuid": "a8244287-b890-4e99-a65c-25f980d25e7e" 00:23:29.391 } 00:23:29.391 ] 00:23:29.391 } 00:23:29.391 ] 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3340417 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.391 Malloc1 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.391 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.392 Asynchronous Event Request test 00:23:29.392 Attaching to 10.0.0.2 00:23:29.392 Attached to 10.0.0.2 00:23:29.392 Registering asynchronous event callbacks... 00:23:29.392 Starting namespace attribute notice tests for all controllers... 00:23:29.392 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:29.392 aer_cb - Changed Namespace 00:23:29.392 Cleaning up... 
00:23:29.392 [ 00:23:29.392 { 00:23:29.392 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:29.392 "subtype": "Discovery", 00:23:29.392 "listen_addresses": [], 00:23:29.392 "allow_any_host": true, 00:23:29.392 "hosts": [] 00:23:29.392 }, 00:23:29.392 { 00:23:29.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.392 "subtype": "NVMe", 00:23:29.392 "listen_addresses": [ 00:23:29.392 { 00:23:29.392 "trtype": "TCP", 00:23:29.392 "adrfam": "IPv4", 00:23:29.392 "traddr": "10.0.0.2", 00:23:29.392 "trsvcid": "4420" 00:23:29.392 } 00:23:29.392 ], 00:23:29.392 "allow_any_host": true, 00:23:29.392 "hosts": [], 00:23:29.392 "serial_number": "SPDK00000000000001", 00:23:29.392 "model_number": "SPDK bdev Controller", 00:23:29.392 "max_namespaces": 2, 00:23:29.392 "min_cntlid": 1, 00:23:29.392 "max_cntlid": 65519, 00:23:29.392 "namespaces": [ 00:23:29.392 { 00:23:29.392 "nsid": 1, 00:23:29.392 "bdev_name": "Malloc0", 00:23:29.392 "name": "Malloc0", 00:23:29.392 "nguid": "A8244287B8904E99A65C25F980D25E7E", 00:23:29.392 "uuid": "a8244287-b890-4e99-a65c-25f980d25e7e" 00:23:29.392 }, 00:23:29.392 { 00:23:29.392 "nsid": 2, 00:23:29.392 "bdev_name": "Malloc1", 00:23:29.392 "name": "Malloc1", 00:23:29.392 "nguid": "66C99A715097486AA0AB28F0F56DAC14", 00:23:29.392 "uuid": "66c99a71-5097-486a-a0ab-28f0f56dac14" 00:23:29.392 } 00:23:29.392 ] 00:23:29.392 } 00:23:29.392 ] 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3340417 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.392 11:05:20 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:29.392 rmmod nvme_tcp 00:23:29.392 rmmod nvme_fabrics 00:23:29.392 rmmod nvme_keyring 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
3340387 ']' 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3340387 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 3340387 ']' 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 3340387 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3340387 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3340387' 00:23:29.392 killing process with pid 3340387 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 3340387 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 3340387 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.392 11:05:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.938 11:05:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:31.938 00:23:31.938 real 0m10.607s 00:23:31.938 user 0m5.401s 00:23:31.938 sys 0m5.894s 00:23:31.938 11:05:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:31.938 11:05:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.938 ************************************ 00:23:31.938 END TEST nvmf_aer 00:23:31.938 ************************************ 00:23:31.938 11:05:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:31.938 11:05:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:31.938 11:05:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:31.938 11:05:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.938 ************************************ 00:23:31.938 START TEST nvmf_async_init 00:23:31.938 ************************************ 00:23:31.938 11:05:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:31.938 * Looking for test storage... 
00:23:31.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:31.938 11:05:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:31.938 11:05:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:23:31.938 11:05:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.938 11:05:23 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:31.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.938 --rc genhtml_branch_coverage=1 00:23:31.938 --rc genhtml_function_coverage=1 00:23:31.938 --rc genhtml_legend=1 00:23:31.938 --rc geninfo_all_blocks=1 00:23:31.938 --rc geninfo_unexecuted_blocks=1 00:23:31.938 
00:23:31.938 ' 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:31.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.938 --rc genhtml_branch_coverage=1 00:23:31.938 --rc genhtml_function_coverage=1 00:23:31.938 --rc genhtml_legend=1 00:23:31.938 --rc geninfo_all_blocks=1 00:23:31.938 --rc geninfo_unexecuted_blocks=1 00:23:31.938 00:23:31.938 ' 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:31.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.938 --rc genhtml_branch_coverage=1 00:23:31.938 --rc genhtml_function_coverage=1 00:23:31.938 --rc genhtml_legend=1 00:23:31.938 --rc geninfo_all_blocks=1 00:23:31.938 --rc geninfo_unexecuted_blocks=1 00:23:31.938 00:23:31.938 ' 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:31.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.938 --rc genhtml_branch_coverage=1 00:23:31.938 --rc genhtml_function_coverage=1 00:23:31.938 --rc genhtml_legend=1 00:23:31.938 --rc geninfo_all_blocks=1 00:23:31.938 --rc geninfo_unexecuted_blocks=1 00:23:31.938 00:23:31.938 ' 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.938 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:31.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6c5b18a629a34bcdb7ef95abc53325b0 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:31.939 11:05:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:40.081 11:05:30 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:40.081 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.081 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:40.081 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:40.082 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:40.082 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:40.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:40.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:23:40.082 00:23:40.082 --- 10.0.0.2 ping statistics --- 00:23:40.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.082 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:40.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:23:40.082 00:23:40.082 --- 10.0.0.1 ping statistics --- 00:23:40.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.082 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3344742 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3344742 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 3344742 ']' 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:40.082 11:05:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.082 [2024-11-06 11:05:30.497193] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:23:40.082 [2024-11-06 11:05:30.497250] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.082 [2024-11-06 11:05:30.577544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.082 [2024-11-06 11:05:30.614957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.082 [2024-11-06 11:05:30.614995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.082 [2024-11-06 11:05:30.615003] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.082 [2024-11-06 11:05:30.615010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.082 [2024-11-06 11:05:30.615016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:40.082 [2024-11-06 11:05:30.615597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.082 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:40.082 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:23:40.082 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:40.082 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:40.082 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.083 [2024-11-06 11:05:31.348544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.083 null0 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6c5b18a629a34bcdb7ef95abc53325b0 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.083 [2024-11-06 11:05:31.408853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.083 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.343 nvme0n1 00:23:40.343 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.343 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:40.343 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.343 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.343 [ 00:23:40.343 { 00:23:40.343 "name": "nvme0n1", 00:23:40.343 "aliases": [ 00:23:40.343 "6c5b18a6-29a3-4bcd-b7ef-95abc53325b0" 00:23:40.343 ], 00:23:40.343 "product_name": "NVMe disk", 00:23:40.343 "block_size": 512, 00:23:40.343 "num_blocks": 2097152, 00:23:40.343 "uuid": "6c5b18a6-29a3-4bcd-b7ef-95abc53325b0", 00:23:40.343 "numa_id": 0, 00:23:40.343 "assigned_rate_limits": { 00:23:40.343 "rw_ios_per_sec": 0, 00:23:40.343 "rw_mbytes_per_sec": 0, 00:23:40.343 "r_mbytes_per_sec": 0, 00:23:40.343 "w_mbytes_per_sec": 0 00:23:40.343 }, 00:23:40.343 "claimed": false, 00:23:40.344 "zoned": false, 00:23:40.344 "supported_io_types": { 00:23:40.344 "read": true, 00:23:40.344 "write": true, 00:23:40.344 "unmap": false, 00:23:40.344 "flush": true, 00:23:40.344 "reset": true, 00:23:40.344 "nvme_admin": true, 00:23:40.344 "nvme_io": true, 00:23:40.344 "nvme_io_md": false, 00:23:40.344 "write_zeroes": true, 00:23:40.344 "zcopy": false, 00:23:40.344 "get_zone_info": false, 00:23:40.344 "zone_management": false, 00:23:40.344 "zone_append": false, 00:23:40.344 "compare": true, 00:23:40.344 "compare_and_write": true, 00:23:40.344 "abort": true, 00:23:40.344 "seek_hole": false, 00:23:40.344 "seek_data": false, 00:23:40.344 "copy": true, 00:23:40.344 
"nvme_iov_md": false 00:23:40.344 }, 00:23:40.344 "memory_domains": [ 00:23:40.344 { 00:23:40.344 "dma_device_id": "system", 00:23:40.344 "dma_device_type": 1 00:23:40.344 } 00:23:40.344 ], 00:23:40.344 "driver_specific": { 00:23:40.344 "nvme": [ 00:23:40.344 { 00:23:40.344 "trid": { 00:23:40.344 "trtype": "TCP", 00:23:40.344 "adrfam": "IPv4", 00:23:40.344 "traddr": "10.0.0.2", 00:23:40.344 "trsvcid": "4420", 00:23:40.344 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:40.344 }, 00:23:40.344 "ctrlr_data": { 00:23:40.344 "cntlid": 1, 00:23:40.344 "vendor_id": "0x8086", 00:23:40.344 "model_number": "SPDK bdev Controller", 00:23:40.344 "serial_number": "00000000000000000000", 00:23:40.344 "firmware_revision": "25.01", 00:23:40.344 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.344 "oacs": { 00:23:40.344 "security": 0, 00:23:40.344 "format": 0, 00:23:40.344 "firmware": 0, 00:23:40.344 "ns_manage": 0 00:23:40.344 }, 00:23:40.344 "multi_ctrlr": true, 00:23:40.344 "ana_reporting": false 00:23:40.344 }, 00:23:40.344 "vs": { 00:23:40.344 "nvme_version": "1.3" 00:23:40.344 }, 00:23:40.344 "ns_data": { 00:23:40.344 "id": 1, 00:23:40.344 "can_share": true 00:23:40.344 } 00:23:40.344 } 00:23:40.344 ], 00:23:40.344 "mp_policy": "active_passive" 00:23:40.344 } 00:23:40.344 } 00:23:40.344 ] 00:23:40.344 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.344 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:40.344 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.344 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.344 [2024-11-06 11:05:31.683085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:40.344 [2024-11-06 11:05:31.683148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x13ecf60 (9): Bad file descriptor 00:23:40.605 [2024-11-06 11:05:31.814841] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:23:40.605 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.605 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:40.605 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.605 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.605 [ 00:23:40.605 { 00:23:40.605 "name": "nvme0n1", 00:23:40.605 "aliases": [ 00:23:40.605 "6c5b18a6-29a3-4bcd-b7ef-95abc53325b0" 00:23:40.605 ], 00:23:40.605 "product_name": "NVMe disk", 00:23:40.605 "block_size": 512, 00:23:40.605 "num_blocks": 2097152, 00:23:40.605 "uuid": "6c5b18a6-29a3-4bcd-b7ef-95abc53325b0", 00:23:40.605 "numa_id": 0, 00:23:40.605 "assigned_rate_limits": { 00:23:40.605 "rw_ios_per_sec": 0, 00:23:40.605 "rw_mbytes_per_sec": 0, 00:23:40.605 "r_mbytes_per_sec": 0, 00:23:40.605 "w_mbytes_per_sec": 0 00:23:40.605 }, 00:23:40.605 "claimed": false, 00:23:40.605 "zoned": false, 00:23:40.605 "supported_io_types": { 00:23:40.605 "read": true, 00:23:40.605 "write": true, 00:23:40.605 "unmap": false, 00:23:40.605 "flush": true, 00:23:40.605 "reset": true, 00:23:40.605 "nvme_admin": true, 00:23:40.605 "nvme_io": true, 00:23:40.605 "nvme_io_md": false, 00:23:40.605 "write_zeroes": true, 00:23:40.605 "zcopy": false, 00:23:40.605 "get_zone_info": false, 00:23:40.605 "zone_management": false, 00:23:40.605 "zone_append": false, 00:23:40.605 "compare": true, 00:23:40.605 "compare_and_write": true, 00:23:40.605 "abort": true, 00:23:40.605 "seek_hole": false, 00:23:40.605 "seek_data": false, 00:23:40.605 "copy": true, 00:23:40.605 "nvme_iov_md": false 00:23:40.605 }, 00:23:40.605 "memory_domains": [ 
00:23:40.605 { 00:23:40.605 "dma_device_id": "system", 00:23:40.605 "dma_device_type": 1 00:23:40.605 } 00:23:40.605 ], 00:23:40.605 "driver_specific": { 00:23:40.605 "nvme": [ 00:23:40.605 { 00:23:40.605 "trid": { 00:23:40.605 "trtype": "TCP", 00:23:40.605 "adrfam": "IPv4", 00:23:40.605 "traddr": "10.0.0.2", 00:23:40.605 "trsvcid": "4420", 00:23:40.605 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:40.605 }, 00:23:40.605 "ctrlr_data": { 00:23:40.605 "cntlid": 2, 00:23:40.605 "vendor_id": "0x8086", 00:23:40.605 "model_number": "SPDK bdev Controller", 00:23:40.605 "serial_number": "00000000000000000000", 00:23:40.605 "firmware_revision": "25.01", 00:23:40.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.605 "oacs": { 00:23:40.605 "security": 0, 00:23:40.605 "format": 0, 00:23:40.605 "firmware": 0, 00:23:40.605 "ns_manage": 0 00:23:40.605 }, 00:23:40.605 "multi_ctrlr": true, 00:23:40.605 "ana_reporting": false 00:23:40.605 }, 00:23:40.605 "vs": { 00:23:40.605 "nvme_version": "1.3" 00:23:40.605 }, 00:23:40.605 "ns_data": { 00:23:40.605 "id": 1, 00:23:40.605 "can_share": true 00:23:40.605 } 00:23:40.605 } 00:23:40.605 ], 00:23:40.605 "mp_policy": "active_passive" 00:23:40.605 } 00:23:40.605 } 00:23:40.605 ] 00:23:40.605 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.605 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.605 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.605 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.605 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.605 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:40.605 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Ya6J783sop 
00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Ya6J783sop 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.Ya6J783sop 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.606 [2024-11-06 11:05:31.899814] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.606 [2024-11-06 11:05:31.899925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.606 [2024-11-06 11:05:31.923889] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:40.606 nvme0n1 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.606 11:05:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.606 [ 00:23:40.606 { 00:23:40.606 "name": "nvme0n1", 00:23:40.606 "aliases": [ 00:23:40.606 "6c5b18a6-29a3-4bcd-b7ef-95abc53325b0" 00:23:40.606 ], 00:23:40.606 "product_name": "NVMe disk", 00:23:40.606 "block_size": 512, 00:23:40.606 "num_blocks": 2097152, 00:23:40.606 "uuid": "6c5b18a6-29a3-4bcd-b7ef-95abc53325b0", 00:23:40.606 "numa_id": 0, 00:23:40.606 "assigned_rate_limits": { 00:23:40.606 "rw_ios_per_sec": 0, 00:23:40.606 
"rw_mbytes_per_sec": 0, 00:23:40.606 "r_mbytes_per_sec": 0, 00:23:40.606 "w_mbytes_per_sec": 0 00:23:40.606 }, 00:23:40.606 "claimed": false, 00:23:40.606 "zoned": false, 00:23:40.606 "supported_io_types": { 00:23:40.606 "read": true, 00:23:40.606 "write": true, 00:23:40.606 "unmap": false, 00:23:40.606 "flush": true, 00:23:40.606 "reset": true, 00:23:40.606 "nvme_admin": true, 00:23:40.606 "nvme_io": true, 00:23:40.606 "nvme_io_md": false, 00:23:40.606 "write_zeroes": true, 00:23:40.606 "zcopy": false, 00:23:40.606 "get_zone_info": false, 00:23:40.606 "zone_management": false, 00:23:40.606 "zone_append": false, 00:23:40.606 "compare": true, 00:23:40.606 "compare_and_write": true, 00:23:40.606 "abort": true, 00:23:40.606 "seek_hole": false, 00:23:40.606 "seek_data": false, 00:23:40.606 "copy": true, 00:23:40.606 "nvme_iov_md": false 00:23:40.606 }, 00:23:40.606 "memory_domains": [ 00:23:40.606 { 00:23:40.606 "dma_device_id": "system", 00:23:40.606 "dma_device_type": 1 00:23:40.606 } 00:23:40.606 ], 00:23:40.606 "driver_specific": { 00:23:40.606 "nvme": [ 00:23:40.606 { 00:23:40.606 "trid": { 00:23:40.606 "trtype": "TCP", 00:23:40.606 "adrfam": "IPv4", 00:23:40.606 "traddr": "10.0.0.2", 00:23:40.606 "trsvcid": "4421", 00:23:40.606 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:40.606 }, 00:23:40.606 "ctrlr_data": { 00:23:40.606 "cntlid": 3, 00:23:40.606 "vendor_id": "0x8086", 00:23:40.606 "model_number": "SPDK bdev Controller", 00:23:40.606 "serial_number": "00000000000000000000", 00:23:40.606 "firmware_revision": "25.01", 00:23:40.606 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.606 "oacs": { 00:23:40.606 "security": 0, 00:23:40.606 "format": 0, 00:23:40.606 "firmware": 0, 00:23:40.606 "ns_manage": 0 00:23:40.606 }, 00:23:40.606 "multi_ctrlr": true, 00:23:40.606 "ana_reporting": false 00:23:40.606 }, 00:23:40.606 "vs": { 00:23:40.606 "nvme_version": "1.3" 00:23:40.606 }, 00:23:40.606 "ns_data": { 00:23:40.606 "id": 1, 00:23:40.606 "can_share": true 00:23:40.606 } 
00:23:40.606 } 00:23:40.606 ], 00:23:40.606 "mp_policy": "active_passive" 00:23:40.606 } 00:23:40.606 } 00:23:40.606 ] 00:23:40.606 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.606 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.606 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.606 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.Ya6J783sop 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:40.866 rmmod nvme_tcp 00:23:40.866 rmmod nvme_fabrics 00:23:40.866 rmmod nvme_keyring 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:40.866 11:05:32 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3344742 ']' 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3344742 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 3344742 ']' 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 3344742 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3344742 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3344742' 00:23:40.866 killing process with pid 3344742 00:23:40.866 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 3344742 00:23:40.867 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 3344742 00:23:40.867 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:40.867 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:40.867 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:40.867 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:41.128 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:41.128 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:41.128 
11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:41.128 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:41.128 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:41.128 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.128 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.128 11:05:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.041 11:05:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:43.041 00:23:43.041 real 0m11.480s 00:23:43.041 user 0m4.152s 00:23:43.041 sys 0m5.880s 00:23:43.041 11:05:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:43.041 11:05:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.041 ************************************ 00:23:43.041 END TEST nvmf_async_init 00:23:43.041 ************************************ 00:23:43.041 11:05:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:43.041 11:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:43.041 11:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:43.041 11:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.041 ************************************ 00:23:43.041 START TEST dma 00:23:43.041 ************************************ 00:23:43.041 11:05:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:23:43.302 * Looking for test storage... 00:23:43.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:43.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.302 --rc genhtml_branch_coverage=1 00:23:43.302 --rc genhtml_function_coverage=1 00:23:43.302 --rc genhtml_legend=1 00:23:43.302 --rc geninfo_all_blocks=1 00:23:43.302 --rc geninfo_unexecuted_blocks=1 00:23:43.302 00:23:43.302 ' 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:43.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.302 --rc genhtml_branch_coverage=1 00:23:43.302 --rc genhtml_function_coverage=1 
00:23:43.302 --rc genhtml_legend=1 00:23:43.302 --rc geninfo_all_blocks=1 00:23:43.302 --rc geninfo_unexecuted_blocks=1 00:23:43.302 00:23:43.302 ' 00:23:43.302 11:05:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:43.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.302 --rc genhtml_branch_coverage=1 00:23:43.302 --rc genhtml_function_coverage=1 00:23:43.302 --rc genhtml_legend=1 00:23:43.302 --rc geninfo_all_blocks=1 00:23:43.302 --rc geninfo_unexecuted_blocks=1 00:23:43.302 00:23:43.302 ' 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:43.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.303 --rc genhtml_branch_coverage=1 00:23:43.303 --rc genhtml_function_coverage=1 00:23:43.303 --rc genhtml_legend=1 00:23:43.303 --rc geninfo_all_blocks=1 00:23:43.303 --rc geninfo_unexecuted_blocks=1 00:23:43.303 00:23:43.303 ' 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:43.303 
11:05:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:43.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:43.303 00:23:43.303 real 0m0.191s 00:23:43.303 user 0m0.108s 00:23:43.303 sys 0m0.090s 00:23:43.303 11:05:34 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:43.303 ************************************ 00:23:43.303 END TEST dma 00:23:43.303 ************************************ 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.303 ************************************ 00:23:43.303 START TEST nvmf_identify 00:23:43.303 ************************************ 00:23:43.303 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:43.565 * Looking for test storage... 
00:23:43.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:43.565 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:43.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.566 --rc genhtml_branch_coverage=1 00:23:43.566 --rc genhtml_function_coverage=1 00:23:43.566 --rc genhtml_legend=1 00:23:43.566 --rc geninfo_all_blocks=1 00:23:43.566 --rc geninfo_unexecuted_blocks=1 00:23:43.566 00:23:43.566 ' 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:23:43.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.566 --rc genhtml_branch_coverage=1 00:23:43.566 --rc genhtml_function_coverage=1 00:23:43.566 --rc genhtml_legend=1 00:23:43.566 --rc geninfo_all_blocks=1 00:23:43.566 --rc geninfo_unexecuted_blocks=1 00:23:43.566 00:23:43.566 ' 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:43.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.566 --rc genhtml_branch_coverage=1 00:23:43.566 --rc genhtml_function_coverage=1 00:23:43.566 --rc genhtml_legend=1 00:23:43.566 --rc geninfo_all_blocks=1 00:23:43.566 --rc geninfo_unexecuted_blocks=1 00:23:43.566 00:23:43.566 ' 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:43.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.566 --rc genhtml_branch_coverage=1 00:23:43.566 --rc genhtml_function_coverage=1 00:23:43.566 --rc genhtml_legend=1 00:23:43.566 --rc geninfo_all_blocks=1 00:23:43.566 --rc geninfo_unexecuted_blocks=1 00:23:43.566 00:23:43.566 ' 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:43.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:43.566 11:05:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:51.708 11:05:42 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:51.708 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.708 
11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:51.708 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:51.708 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:51.708 11:05:42 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.708 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:51.709 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
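The device-discovery loop traced above maps each supported PCI address to its kernel net interface by globbing sysfs, via `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` and then stripping the path prefix with `${pci_net_devs[@]##*/}`. A minimal standalone sketch of that lookup follows; the `sysfs_root` parameter is an assumption added here (the real script hardcodes `/sys/bus/pci`) so the sketch can be exercised against a fake tree:

```shell
#!/usr/bin/env bash
# Sketch of the PCI -> net-device lookup from nvmf/common.sh.
# sysfs_root is a hypothetical parameter standing in for /sys/bus/pci.
pci_to_netdevs() {
    local sysfs_root=$1 pci=$2
    local devs=("$sysfs_root/devices/$pci/net/"*)
    # An unmatched glob stays literal; treat that as "no net devices".
    [[ -e ${devs[0]} ]] || return 1
    # Keep only the interface names, as the trace does with ${pci_net_devs[@]##*/}
    echo "${devs[@]##*/}"
}
```

For the first E810 port in the log, `pci_to_netdevs /sys/bus/pci 0000:4b:00.0` would print `cvl_0_0`.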
00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:51.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:23:51.709 00:23:51.709 --- 10.0.0.2 ping statistics --- 00:23:51.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.709 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:23:51.709 00:23:51.709 --- 10.0.0.1 ping statistics --- 00:23:51.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.709 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3349302 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3349302 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 3349302 ']' 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
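`waitforlisten 3349302` above blocks until the freshly started `nvmf_tgt` process is accepting RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that wait-for-socket pattern, assuming a simple poll loop (the retry count and sleep interval here are illustrative, not SPDK's exact values):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten idiom: poll until the target's RPC
# socket path appears, giving up after max_retries attempts.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        # Accept either a real UNIX socket or any existing path,
        # so the sketch is testable without a live server.
        [[ -S $sock || -e $sock ]] && return 0
        sleep 0.1
    done
    return 1
}
```

The real helper additionally verifies the PID is still alive between polls, so a crashed target fails fast instead of burning the full retry budget.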
00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:51.709 11:05:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.709 [2024-11-06 11:05:42.436143] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:23:51.709 [2024-11-06 11:05:42.436218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.709 [2024-11-06 11:05:42.520164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:51.709 [2024-11-06 11:05:42.563143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.709 [2024-11-06 11:05:42.563182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.709 [2024-11-06 11:05:42.563190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.709 [2024-11-06 11:05:42.563196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.709 [2024-11-06 11:05:42.563203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
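The harness above registers `trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT` before waiting on the target, so shared memory is dumped and the target torn down on any abnormal exit. A minimal sketch of that cleanup idiom; the handler here only records that it fired, and a `RETURN` trap is used instead of `EXIT` purely so the sketch can be verified inline:

```shell
#!/usr/bin/env bash
# Sketch of the trap-based cleanup pattern from the test harness.
# The real handler runs process_shm and nvmftestfini; this stand-in
# just sets a flag so the behavior is observable.
cleanup_ran=0
cleanup() {
    cleanup_ran=1
}
run_with_cleanup() {
    trap cleanup RETURN   # fires when this function returns, success or failure
    :                     # stand-in for the test body
}
```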
00:23:51.709 [2024-11-06 11:05:42.564782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.709 [2024-11-06 11:05:42.565009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:51.709 [2024-11-06 11:05:42.565009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.709 [2024-11-06 11:05:42.564861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.970 [2024-11-06 11:05:43.247854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.970 Malloc0 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.970 11:05:43 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.970 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.971 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:51.971 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.971 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.971 [2024-11-06 11:05:43.355016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.971 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.971 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:51.971 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.971 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.971 11:05:43 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.971 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:51.971 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.971 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:51.971 [ 00:23:51.971 { 00:23:51.971 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:51.971 "subtype": "Discovery", 00:23:51.971 "listen_addresses": [ 00:23:51.971 { 00:23:51.971 "trtype": "TCP", 00:23:51.971 "adrfam": "IPv4", 00:23:51.971 "traddr": "10.0.0.2", 00:23:51.971 "trsvcid": "4420" 00:23:51.971 } 00:23:51.971 ], 00:23:51.971 "allow_any_host": true, 00:23:51.971 "hosts": [] 00:23:51.971 }, 00:23:51.971 { 00:23:51.971 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.971 "subtype": "NVMe", 00:23:51.971 "listen_addresses": [ 00:23:51.971 { 00:23:51.971 "trtype": "TCP", 00:23:51.971 "adrfam": "IPv4", 00:23:51.971 "traddr": "10.0.0.2", 00:23:51.971 "trsvcid": "4420" 00:23:51.971 } 00:23:51.971 ], 00:23:51.971 "allow_any_host": true, 00:23:51.971 "hosts": [], 00:23:51.971 "serial_number": "SPDK00000000000001", 00:23:51.971 "model_number": "SPDK bdev Controller", 00:23:51.971 "max_namespaces": 32, 00:23:51.971 "min_cntlid": 1, 00:23:51.971 "max_cntlid": 65519, 00:23:51.971 "namespaces": [ 00:23:51.971 { 00:23:51.971 "nsid": 1, 00:23:51.971 "bdev_name": "Malloc0", 00:23:51.971 "name": "Malloc0", 00:23:51.971 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:51.971 "eui64": "ABCDEF0123456789", 00:23:51.971 "uuid": "4ab341cf-c6eb-4690-b2e9-cf587d3dfa96" 00:23:51.971 } 00:23:51.971 ] 00:23:51.971 } 00:23:51.971 ] 00:23:51.971 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.971 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:52.233 [2024-11-06 11:05:43.418700] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:23:52.233 [2024-11-06 11:05:43.418781] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349499 ] 00:23:52.233 [2024-11-06 11:05:43.474915] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:52.233 [2024-11-06 11:05:43.474970] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:52.233 [2024-11-06 11:05:43.474976] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:52.233 [2024-11-06 11:05:43.474989] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:52.233 [2024-11-06 11:05:43.475000] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:52.233 [2024-11-06 11:05:43.475693] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:52.233 [2024-11-06 11:05:43.475729] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe79690 0 00:23:52.233 [2024-11-06 11:05:43.481761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:52.233 [2024-11-06 11:05:43.481774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:52.233 [2024-11-06 11:05:43.481780] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:52.233 [2024-11-06 11:05:43.481784] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:52.233 [2024-11-06 11:05:43.481816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.233 [2024-11-06 11:05:43.481822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.233 [2024-11-06 11:05:43.481826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe79690) 00:23:52.233 [2024-11-06 11:05:43.481840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:52.233 [2024-11-06 11:05:43.481857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb100, cid 0, qid 0 00:23:52.233 [2024-11-06 11:05:43.489757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.233 [2024-11-06 11:05:43.489767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.233 [2024-11-06 11:05:43.489771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.233 [2024-11-06 11:05:43.489775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb100) on tqpair=0xe79690 00:23:52.233 [2024-11-06 11:05:43.489785] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:52.233 [2024-11-06 11:05:43.489792] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:52.233 [2024-11-06 11:05:43.489798] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:52.233 [2024-11-06 11:05:43.489813] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.233 [2024-11-06 11:05:43.489817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.233 [2024-11-06 11:05:43.489821] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe79690) 
00:23:52.233 [2024-11-06 11:05:43.489828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.233 [2024-11-06 11:05:43.489842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb100, cid 0, qid 0 00:23:52.233 [2024-11-06 11:05:43.490015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.233 [2024-11-06 11:05:43.490021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.233 [2024-11-06 11:05:43.490025] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.233 [2024-11-06 11:05:43.490029] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb100) on tqpair=0xe79690 00:23:52.233 [2024-11-06 11:05:43.490035] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:52.233 [2024-11-06 11:05:43.490042] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:52.233 [2024-11-06 11:05:43.490049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.233 [2024-11-06 11:05:43.490053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.233 [2024-11-06 11:05:43.490057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe79690) 00:23:52.233 [2024-11-06 11:05:43.490064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.233 [2024-11-06 11:05:43.490074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb100, cid 0, qid 0 00:23:52.233 [2024-11-06 11:05:43.490312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.234 [2024-11-06 11:05:43.490319] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:52.234 [2024-11-06 11:05:43.490322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.490326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb100) on tqpair=0xe79690 00:23:52.234 [2024-11-06 11:05:43.490335] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:52.234 [2024-11-06 11:05:43.490344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:52.234 [2024-11-06 11:05:43.490350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.490354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.490358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe79690) 00:23:52.234 [2024-11-06 11:05:43.490365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.234 [2024-11-06 11:05:43.490375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb100, cid 0, qid 0 00:23:52.234 [2024-11-06 11:05:43.490614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.234 [2024-11-06 11:05:43.490621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.234 [2024-11-06 11:05:43.490624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.490628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb100) on tqpair=0xe79690 00:23:52.234 [2024-11-06 11:05:43.490634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:52.234 [2024-11-06 11:05:43.490643] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.490647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.490650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe79690) 00:23:52.234 [2024-11-06 11:05:43.490657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.234 [2024-11-06 11:05:43.490667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb100, cid 0, qid 0 00:23:52.234 [2024-11-06 11:05:43.490890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.234 [2024-11-06 11:05:43.490897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.234 [2024-11-06 11:05:43.490901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.490905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb100) on tqpair=0xe79690 00:23:52.234 [2024-11-06 11:05:43.490910] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:52.234 [2024-11-06 11:05:43.490915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:52.234 [2024-11-06 11:05:43.490922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:52.234 [2024-11-06 11:05:43.491033] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:52.234 [2024-11-06 11:05:43.491038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:23:52.234 [2024-11-06 11:05:43.491046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.491050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.491054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe79690) 00:23:52.234 [2024-11-06 11:05:43.491061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.234 [2024-11-06 11:05:43.491072] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb100, cid 0, qid 0 00:23:52.234 [2024-11-06 11:05:43.491290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.234 [2024-11-06 11:05:43.491298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.234 [2024-11-06 11:05:43.491302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.491306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb100) on tqpair=0xe79690 00:23:52.234 [2024-11-06 11:05:43.491311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:52.234 [2024-11-06 11:05:43.491320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.491324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.491327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe79690) 00:23:52.234 [2024-11-06 11:05:43.491334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.234 [2024-11-06 11:05:43.491344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb100, cid 0, qid 0 00:23:52.234 [2024-11-06 
11:05:43.491539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.234 [2024-11-06 11:05:43.491546] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.234 [2024-11-06 11:05:43.491549] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.491553] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb100) on tqpair=0xe79690 00:23:52.234 [2024-11-06 11:05:43.491558] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:52.234 [2024-11-06 11:05:43.491563] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:52.234 [2024-11-06 11:05:43.491570] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:52.234 [2024-11-06 11:05:43.491583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:52.234 [2024-11-06 11:05:43.491592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.491596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe79690) 00:23:52.234 [2024-11-06 11:05:43.491603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.234 [2024-11-06 11:05:43.491613] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb100, cid 0, qid 0 00:23:52.234 [2024-11-06 11:05:43.491809] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.234 [2024-11-06 11:05:43.491816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:23:52.234 [2024-11-06 11:05:43.491820] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.491825] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe79690): datao=0, datal=4096, cccid=0 00:23:52.234 [2024-11-06 11:05:43.491829] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xedb100) on tqpair(0xe79690): expected_datao=0, payload_size=4096 00:23:52.234 [2024-11-06 11:05:43.491834] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.491850] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.491854] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.536753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.234 [2024-11-06 11:05:43.536763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.234 [2024-11-06 11:05:43.536767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.536771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb100) on tqpair=0xe79690 00:23:52.234 [2024-11-06 11:05:43.536779] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:52.234 [2024-11-06 11:05:43.536787] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:52.234 [2024-11-06 11:05:43.536792] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:52.234 [2024-11-06 11:05:43.536799] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:52.234 [2024-11-06 11:05:43.536804] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:23:52.234 [2024-11-06 11:05:43.536810] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:52.234 [2024-11-06 11:05:43.536821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:52.234 [2024-11-06 11:05:43.536828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.536832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.536836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe79690) 00:23:52.234 [2024-11-06 11:05:43.536843] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:52.234 [2024-11-06 11:05:43.536856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb100, cid 0, qid 0 00:23:52.234 [2024-11-06 11:05:43.537030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.234 [2024-11-06 11:05:43.537036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.234 [2024-11-06 11:05:43.537040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.537044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb100) on tqpair=0xe79690 00:23:52.234 [2024-11-06 11:05:43.537051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.537055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.537059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe79690) 00:23:52.234 [2024-11-06 11:05:43.537065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.234 [2024-11-06 11:05:43.537071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.537075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.537078] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe79690) 00:23:52.234 [2024-11-06 11:05:43.537084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.234 [2024-11-06 11:05:43.537091] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.537095] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.537098] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe79690) 00:23:52.234 [2024-11-06 11:05:43.537104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.234 [2024-11-06 11:05:43.537110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.537114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.234 [2024-11-06 11:05:43.537117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe79690) 00:23:52.235 [2024-11-06 11:05:43.537123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.235 [2024-11-06 11:05:43.537128] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:52.235 [2024-11-06 11:05:43.537139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:23:52.235 [2024-11-06 11:05:43.537145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.537149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe79690) 00:23:52.235 [2024-11-06 11:05:43.537156] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.235 [2024-11-06 11:05:43.537168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb100, cid 0, qid 0 00:23:52.235 [2024-11-06 11:05:43.537173] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb280, cid 1, qid 0 00:23:52.235 [2024-11-06 11:05:43.537178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb400, cid 2, qid 0 00:23:52.235 [2024-11-06 11:05:43.537182] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb580, cid 3, qid 0 00:23:52.235 [2024-11-06 11:05:43.537187] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb700, cid 4, qid 0 00:23:52.235 [2024-11-06 11:05:43.537416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.235 [2024-11-06 11:05:43.537422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.235 [2024-11-06 11:05:43.537426] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.537430] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb700) on tqpair=0xe79690 00:23:52.235 [2024-11-06 11:05:43.537438] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:52.235 [2024-11-06 11:05:43.537443] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:23:52.235 [2024-11-06 11:05:43.537454] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.537459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe79690) 00:23:52.235 [2024-11-06 11:05:43.537465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.235 [2024-11-06 11:05:43.537475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb700, cid 4, qid 0 00:23:52.235 [2024-11-06 11:05:43.537709] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.235 [2024-11-06 11:05:43.537716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.235 [2024-11-06 11:05:43.537719] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.537723] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe79690): datao=0, datal=4096, cccid=4 00:23:52.235 [2024-11-06 11:05:43.537728] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xedb700) on tqpair(0xe79690): expected_datao=0, payload_size=4096 00:23:52.235 [2024-11-06 11:05:43.537732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.537739] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.537743] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.537912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.235 [2024-11-06 11:05:43.537919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.235 [2024-11-06 11:05:43.537922] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.537926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb700) on tqpair=0xe79690 00:23:52.235 [2024-11-06 11:05:43.537938] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:52.235 [2024-11-06 11:05:43.537960] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.537964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe79690) 00:23:52.235 [2024-11-06 11:05:43.537973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.235 [2024-11-06 11:05:43.537980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.537984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.537987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe79690) 00:23:52.235 [2024-11-06 11:05:43.537994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.235 [2024-11-06 11:05:43.538008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb700, cid 4, qid 0 00:23:52.235 [2024-11-06 11:05:43.538013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb880, cid 5, qid 0 00:23:52.235 [2024-11-06 11:05:43.538260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.235 [2024-11-06 11:05:43.538267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.235 [2024-11-06 11:05:43.538270] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.538274] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe79690): datao=0, datal=1024, cccid=4 00:23:52.235 [2024-11-06 11:05:43.538278] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xedb700) on tqpair(0xe79690): expected_datao=0, 
payload_size=1024 00:23:52.235 [2024-11-06 11:05:43.538283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.538289] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.538293] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.538299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.235 [2024-11-06 11:05:43.538305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.235 [2024-11-06 11:05:43.538308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.538312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb880) on tqpair=0xe79690 00:23:52.235 [2024-11-06 11:05:43.578931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.235 [2024-11-06 11:05:43.578941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.235 [2024-11-06 11:05:43.578945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.578949] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb700) on tqpair=0xe79690 00:23:52.235 [2024-11-06 11:05:43.578960] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.578964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe79690) 00:23:52.235 [2024-11-06 11:05:43.578971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.235 [2024-11-06 11:05:43.578986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb700, cid 4, qid 0 00:23:52.235 [2024-11-06 11:05:43.579216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.235 [2024-11-06 11:05:43.579222] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.235 [2024-11-06 11:05:43.579226] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.579230] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe79690): datao=0, datal=3072, cccid=4 00:23:52.235 [2024-11-06 11:05:43.579234] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xedb700) on tqpair(0xe79690): expected_datao=0, payload_size=3072 00:23:52.235 [2024-11-06 11:05:43.579238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.579254] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.579258] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.619912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.235 [2024-11-06 11:05:43.619924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.235 [2024-11-06 11:05:43.619928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.619932] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb700) on tqpair=0xe79690 00:23:52.235 [2024-11-06 11:05:43.619941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.619945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe79690) 00:23:52.235 [2024-11-06 11:05:43.619952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.235 [2024-11-06 11:05:43.619966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb700, cid 4, qid 0 00:23:52.235 [2024-11-06 11:05:43.620224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.235 [2024-11-06 
11:05:43.620230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.235 [2024-11-06 11:05:43.620234] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.620237] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe79690): datao=0, datal=8, cccid=4 00:23:52.235 [2024-11-06 11:05:43.620242] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xedb700) on tqpair(0xe79690): expected_datao=0, payload_size=8 00:23:52.235 [2024-11-06 11:05:43.620246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.620253] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.235 [2024-11-06 11:05:43.620256] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.499 [2024-11-06 11:05:43.663754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.499 [2024-11-06 11:05:43.663764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.499 [2024-11-06 11:05:43.663768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.499 [2024-11-06 11:05:43.663772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb700) on tqpair=0xe79690 00:23:52.499 ===================================================== 00:23:52.499 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:52.499 ===================================================== 00:23:52.499 Controller Capabilities/Features 00:23:52.499 ================================ 00:23:52.499 Vendor ID: 0000 00:23:52.499 Subsystem Vendor ID: 0000 00:23:52.499 Serial Number: .................... 00:23:52.499 Model Number: ........................................ 
00:23:52.499 Firmware Version: 25.01 00:23:52.499 Recommended Arb Burst: 0 00:23:52.499 IEEE OUI Identifier: 00 00 00 00:23:52.499 Multi-path I/O 00:23:52.499 May have multiple subsystem ports: No 00:23:52.499 May have multiple controllers: No 00:23:52.499 Associated with SR-IOV VF: No 00:23:52.499 Max Data Transfer Size: 131072 00:23:52.499 Max Number of Namespaces: 0 00:23:52.499 Max Number of I/O Queues: 1024 00:23:52.499 NVMe Specification Version (VS): 1.3 00:23:52.499 NVMe Specification Version (Identify): 1.3 00:23:52.499 Maximum Queue Entries: 128 00:23:52.499 Contiguous Queues Required: Yes 00:23:52.499 Arbitration Mechanisms Supported 00:23:52.499 Weighted Round Robin: Not Supported 00:23:52.499 Vendor Specific: Not Supported 00:23:52.499 Reset Timeout: 15000 ms 00:23:52.499 Doorbell Stride: 4 bytes 00:23:52.499 NVM Subsystem Reset: Not Supported 00:23:52.499 Command Sets Supported 00:23:52.499 NVM Command Set: Supported 00:23:52.499 Boot Partition: Not Supported 00:23:52.499 Memory Page Size Minimum: 4096 bytes 00:23:52.499 Memory Page Size Maximum: 4096 bytes 00:23:52.499 Persistent Memory Region: Not Supported 00:23:52.499 Optional Asynchronous Events Supported 00:23:52.499 Namespace Attribute Notices: Not Supported 00:23:52.499 Firmware Activation Notices: Not Supported 00:23:52.499 ANA Change Notices: Not Supported 00:23:52.499 PLE Aggregate Log Change Notices: Not Supported 00:23:52.499 LBA Status Info Alert Notices: Not Supported 00:23:52.499 EGE Aggregate Log Change Notices: Not Supported 00:23:52.499 Normal NVM Subsystem Shutdown event: Not Supported 00:23:52.499 Zone Descriptor Change Notices: Not Supported 00:23:52.499 Discovery Log Change Notices: Supported 00:23:52.499 Controller Attributes 00:23:52.499 128-bit Host Identifier: Not Supported 00:23:52.499 Non-Operational Permissive Mode: Not Supported 00:23:52.499 NVM Sets: Not Supported 00:23:52.499 Read Recovery Levels: Not Supported 00:23:52.499 Endurance Groups: Not Supported 00:23:52.499 
Predictable Latency Mode: Not Supported 00:23:52.499 Traffic Based Keep ALive: Not Supported 00:23:52.499 Namespace Granularity: Not Supported 00:23:52.499 SQ Associations: Not Supported 00:23:52.499 UUID List: Not Supported 00:23:52.499 Multi-Domain Subsystem: Not Supported 00:23:52.499 Fixed Capacity Management: Not Supported 00:23:52.499 Variable Capacity Management: Not Supported 00:23:52.499 Delete Endurance Group: Not Supported 00:23:52.499 Delete NVM Set: Not Supported 00:23:52.499 Extended LBA Formats Supported: Not Supported 00:23:52.499 Flexible Data Placement Supported: Not Supported 00:23:52.499 00:23:52.499 Controller Memory Buffer Support 00:23:52.499 ================================ 00:23:52.499 Supported: No 00:23:52.499 00:23:52.499 Persistent Memory Region Support 00:23:52.499 ================================ 00:23:52.499 Supported: No 00:23:52.499 00:23:52.499 Admin Command Set Attributes 00:23:52.499 ============================ 00:23:52.499 Security Send/Receive: Not Supported 00:23:52.499 Format NVM: Not Supported 00:23:52.499 Firmware Activate/Download: Not Supported 00:23:52.499 Namespace Management: Not Supported 00:23:52.499 Device Self-Test: Not Supported 00:23:52.499 Directives: Not Supported 00:23:52.499 NVMe-MI: Not Supported 00:23:52.499 Virtualization Management: Not Supported 00:23:52.499 Doorbell Buffer Config: Not Supported 00:23:52.499 Get LBA Status Capability: Not Supported 00:23:52.499 Command & Feature Lockdown Capability: Not Supported 00:23:52.499 Abort Command Limit: 1 00:23:52.499 Async Event Request Limit: 4 00:23:52.499 Number of Firmware Slots: N/A 00:23:52.499 Firmware Slot 1 Read-Only: N/A 00:23:52.499 Firmware Activation Without Reset: N/A 00:23:52.499 Multiple Update Detection Support: N/A 00:23:52.499 Firmware Update Granularity: No Information Provided 00:23:52.499 Per-Namespace SMART Log: No 00:23:52.499 Asymmetric Namespace Access Log Page: Not Supported 00:23:52.499 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:23:52.499 Command Effects Log Page: Not Supported 00:23:52.499 Get Log Page Extended Data: Supported 00:23:52.499 Telemetry Log Pages: Not Supported 00:23:52.499 Persistent Event Log Pages: Not Supported 00:23:52.499 Supported Log Pages Log Page: May Support 00:23:52.499 Commands Supported & Effects Log Page: Not Supported 00:23:52.499 Feature Identifiers & Effects Log Page: May Support 00:23:52.499 NVMe-MI Commands & Effects Log Page: May Support 00:23:52.499 Data Area 4 for Telemetry Log: Not Supported 00:23:52.499 Error Log Page Entries Supported: 128 00:23:52.499 Keep Alive: Not Supported 00:23:52.499 00:23:52.499 NVM Command Set Attributes 00:23:52.499 ========================== 00:23:52.499 Submission Queue Entry Size 00:23:52.499 Max: 1 00:23:52.499 Min: 1 00:23:52.499 Completion Queue Entry Size 00:23:52.499 Max: 1 00:23:52.499 Min: 1 00:23:52.499 Number of Namespaces: 0 00:23:52.499 Compare Command: Not Supported 00:23:52.499 Write Uncorrectable Command: Not Supported 00:23:52.499 Dataset Management Command: Not Supported 00:23:52.499 Write Zeroes Command: Not Supported 00:23:52.499 Set Features Save Field: Not Supported 00:23:52.499 Reservations: Not Supported 00:23:52.499 Timestamp: Not Supported 00:23:52.499 Copy: Not Supported 00:23:52.499 Volatile Write Cache: Not Present 00:23:52.499 Atomic Write Unit (Normal): 1 00:23:52.499 Atomic Write Unit (PFail): 1 00:23:52.499 Atomic Compare & Write Unit: 1 00:23:52.499 Fused Compare & Write: Supported 00:23:52.499 Scatter-Gather List 00:23:52.499 SGL Command Set: Supported 00:23:52.499 SGL Keyed: Supported 00:23:52.499 SGL Bit Bucket Descriptor: Not Supported 00:23:52.499 SGL Metadata Pointer: Not Supported 00:23:52.499 Oversized SGL: Not Supported 00:23:52.499 SGL Metadata Address: Not Supported 00:23:52.499 SGL Offset: Supported 00:23:52.499 Transport SGL Data Block: Not Supported 00:23:52.499 Replay Protected Memory Block: Not Supported 00:23:52.499 00:23:52.499 
Firmware Slot Information 00:23:52.499 ========================= 00:23:52.499 Active slot: 0 00:23:52.499 00:23:52.499 00:23:52.499 Error Log 00:23:52.499 ========= 00:23:52.499 00:23:52.499 Active Namespaces 00:23:52.499 ================= 00:23:52.499 Discovery Log Page 00:23:52.499 ================== 00:23:52.499 Generation Counter: 2 00:23:52.499 Number of Records: 2 00:23:52.499 Record Format: 0 00:23:52.499 00:23:52.499 Discovery Log Entry 0 00:23:52.499 ---------------------- 00:23:52.499 Transport Type: 3 (TCP) 00:23:52.499 Address Family: 1 (IPv4) 00:23:52.499 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:52.499 Entry Flags: 00:23:52.499 Duplicate Returned Information: 1 00:23:52.499 Explicit Persistent Connection Support for Discovery: 1 00:23:52.499 Transport Requirements: 00:23:52.499 Secure Channel: Not Required 00:23:52.499 Port ID: 0 (0x0000) 00:23:52.499 Controller ID: 65535 (0xffff) 00:23:52.499 Admin Max SQ Size: 128 00:23:52.499 Transport Service Identifier: 4420 00:23:52.499 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:52.499 Transport Address: 10.0.0.2 00:23:52.499 Discovery Log Entry 1 00:23:52.499 ---------------------- 00:23:52.499 Transport Type: 3 (TCP) 00:23:52.499 Address Family: 1 (IPv4) 00:23:52.499 Subsystem Type: 2 (NVM Subsystem) 00:23:52.499 Entry Flags: 00:23:52.499 Duplicate Returned Information: 0 00:23:52.499 Explicit Persistent Connection Support for Discovery: 0 00:23:52.499 Transport Requirements: 00:23:52.499 Secure Channel: Not Required 00:23:52.499 Port ID: 0 (0x0000) 00:23:52.499 Controller ID: 65535 (0xffff) 00:23:52.499 Admin Max SQ Size: 128 00:23:52.500 Transport Service Identifier: 4420 00:23:52.500 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:52.500 Transport Address: 10.0.0.2 [2024-11-06 11:05:43.663862] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:52.500 [2024-11-06 
11:05:43.663873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb100) on tqpair=0xe79690 00:23:52.500 [2024-11-06 11:05:43.663880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.500 [2024-11-06 11:05:43.663885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb280) on tqpair=0xe79690 00:23:52.500 [2024-11-06 11:05:43.663890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.500 [2024-11-06 11:05:43.663895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb400) on tqpair=0xe79690 00:23:52.500 [2024-11-06 11:05:43.663900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.500 [2024-11-06 11:05:43.663905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb580) on tqpair=0xe79690 00:23:52.500 [2024-11-06 11:05:43.663909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.500 [2024-11-06 11:05:43.663918] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.663922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.663926] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe79690) 00:23:52.500 [2024-11-06 11:05:43.663933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.500 [2024-11-06 11:05:43.663947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb580, cid 3, qid 0 00:23:52.500 [2024-11-06 11:05:43.664185] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.500 [2024-11-06 
11:05:43.664192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.500 [2024-11-06 11:05:43.664197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.664201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb580) on tqpair=0xe79690 00:23:52.500 [2024-11-06 11:05:43.664211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.664215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.664218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe79690) 00:23:52.500 [2024-11-06 11:05:43.664225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.500 [2024-11-06 11:05:43.664238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb580, cid 3, qid 0 00:23:52.500 [2024-11-06 11:05:43.664435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.500 [2024-11-06 11:05:43.664442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.500 [2024-11-06 11:05:43.664445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.664449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb580) on tqpair=0xe79690 00:23:52.500 [2024-11-06 11:05:43.664454] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:52.500 [2024-11-06 11:05:43.664459] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:52.500 [2024-11-06 11:05:43.664468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.664472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.500 
[2024-11-06 11:05:43.664476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe79690) 00:23:52.500 [2024-11-06 11:05:43.664483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.500 [2024-11-06 11:05:43.664493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb580, cid 3, qid 0 00:23:52.500 [2024-11-06 11:05:43.664736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.500 [2024-11-06 11:05:43.664743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.500 [2024-11-06 11:05:43.664751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.664755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb580) on tqpair=0xe79690 00:23:52.500 [2024-11-06 11:05:43.664764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.664769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.664772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe79690) 00:23:52.500 [2024-11-06 11:05:43.664779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.500 [2024-11-06 11:05:43.664789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb580, cid 3, qid 0 00:23:52.500 [2024-11-06 11:05:43.664962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.500 [2024-11-06 11:05:43.664968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.500 [2024-11-06 11:05:43.664972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.664975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb580) on tqpair=0xe79690 
00:23:52.500 [2024-11-06 11:05:43.664985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.664989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.664993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe79690) 00:23:52.500 [2024-11-06 11:05:43.665000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.500 [2024-11-06 11:05:43.665010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb580, cid 3, qid 0 00:23:52.500 [2024-11-06 11:05:43.665190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.500 [2024-11-06 11:05:43.665196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.500 [2024-11-06 11:05:43.665200] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.665204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb580) on tqpair=0xe79690 00:23:52.500 [2024-11-06 11:05:43.665213] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.665217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.665221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe79690) 00:23:52.500 [2024-11-06 11:05:43.665228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.500 [2024-11-06 11:05:43.665238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb580, cid 3, qid 0 00:23:52.500 [2024-11-06 11:05:43.665444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.500 [2024-11-06 11:05:43.665450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.500 
[2024-11-06 11:05:43.665454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.665457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb580) on tqpair=0xe79690 00:23:52.500 [2024-11-06 11:05:43.665467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.665471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.665474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe79690) 00:23:52.500 [2024-11-06 11:05:43.665481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.500 [2024-11-06 11:05:43.665491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb580, cid 3, qid 0 00:23:52.500 [2024-11-06 11:05:43.665694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.500 [2024-11-06 11:05:43.665700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.500 [2024-11-06 11:05:43.665703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.665707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb580) on tqpair=0xe79690 00:23:52.500 [2024-11-06 11:05:43.665717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.665721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.665724] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe79690) 00:23:52.500 [2024-11-06 11:05:43.665731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.500 [2024-11-06 11:05:43.665741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb580, cid 3, qid 0 
00:23:52.500 [2024-11-06 11:05:43.665930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.500 [2024-11-06 11:05:43.665936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.500 [2024-11-06 11:05:43.665940] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.665943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb580) on tqpair=0xe79690 00:23:52.500 [2024-11-06 11:05:43.665953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.665957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.665960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe79690) 00:23:52.500 [2024-11-06 11:05:43.665967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.500 [2024-11-06 11:05:43.665978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb580, cid 3, qid 0 00:23:52.500 [2024-11-06 11:05:43.666199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.500 [2024-11-06 11:05:43.666207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.500 [2024-11-06 11:05:43.666211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.666214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb580) on tqpair=0xe79690 00:23:52.500 [2024-11-06 11:05:43.666224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.666228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.666232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe79690) 00:23:52.500 [2024-11-06 11:05:43.666238] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.500 [2024-11-06 11:05:43.666249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb580, cid 3, qid 0 00:23:52.500 [2024-11-06 11:05:43.666451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.500 [2024-11-06 11:05:43.666457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.500 [2024-11-06 11:05:43.666461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.666465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb580) on tqpair=0xe79690 00:23:52.500 [2024-11-06 11:05:43.666474] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.666478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.500 [2024-11-06 11:05:43.666482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe79690) 00:23:52.501 [2024-11-06 11:05:43.666489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.501 [2024-11-06 11:05:43.666499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb580, cid 3, qid 0 00:23:52.501 [2024-11-06 11:05:43.666705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.501 [2024-11-06 11:05:43.666711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.501 [2024-11-06 11:05:43.666714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.666718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb580) on tqpair=0xe79690 00:23:52.501 [2024-11-06 11:05:43.666727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.666731] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.666735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe79690) 00:23:52.501 [2024-11-06 11:05:43.666742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.501 [2024-11-06 11:05:43.666755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb580, cid 3, qid 0 00:23:52.501 [2024-11-06 11:05:43.666913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.501 [2024-11-06 11:05:43.666919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.501 [2024-11-06 11:05:43.666923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.666926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb580) on tqpair=0xe79690 00:23:52.501 [2024-11-06 11:05:43.666936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.666940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.666943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe79690) 00:23:52.501 [2024-11-06 11:05:43.666950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.501 [2024-11-06 11:05:43.666961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb580, cid 3, qid 0 00:23:52.501 [2024-11-06 11:05:43.667158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.501 [2024-11-06 11:05:43.667164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.501 [2024-11-06 11:05:43.667170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.667174] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb580) on tqpair=0xe79690 00:23:52.501 [2024-11-06 11:05:43.667183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.667187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.667191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe79690) 00:23:52.501 [2024-11-06 11:05:43.667197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.501 [2024-11-06 11:05:43.667208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb580, cid 3, qid 0 00:23:52.501 [2024-11-06 11:05:43.667408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.501 [2024-11-06 11:05:43.667414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.501 [2024-11-06 11:05:43.667417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.667421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb580) on tqpair=0xe79690 00:23:52.501 [2024-11-06 11:05:43.667430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.667434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.667438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe79690) 00:23:52.501 [2024-11-06 11:05:43.667445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.501 [2024-11-06 11:05:43.667455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb580, cid 3, qid 0 00:23:52.501 [2024-11-06 11:05:43.667661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.501 [2024-11-06 
11:05:43.667667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.501 [2024-11-06 11:05:43.667670] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.667674] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb580) on tqpair=0xe79690 00:23:52.501 [2024-11-06 11:05:43.667684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.667688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.667691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe79690) 00:23:52.501 [2024-11-06 11:05:43.667698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.501 [2024-11-06 11:05:43.667708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xedb580, cid 3, qid 0 00:23:52.501 [2024-11-06 11:05:43.671754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.501 [2024-11-06 11:05:43.671762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.501 [2024-11-06 11:05:43.671765] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.671769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xedb580) on tqpair=0xe79690 00:23:52.501 [2024-11-06 11:05:43.671777] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:23:52.501 00:23:52.501 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:52.501 [2024-11-06 11:05:43.716105] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 
initialization... 00:23:52.501 [2024-11-06 11:05:43.716146] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349511 ] 00:23:52.501 [2024-11-06 11:05:43.769799] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:52.501 [2024-11-06 11:05:43.769847] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:52.501 [2024-11-06 11:05:43.769852] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:52.501 [2024-11-06 11:05:43.769863] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:52.501 [2024-11-06 11:05:43.769874] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:52.501 [2024-11-06 11:05:43.773957] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:52.501 [2024-11-06 11:05:43.773985] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10c6690 0 00:23:52.501 [2024-11-06 11:05:43.781758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:52.501 [2024-11-06 11:05:43.781771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:52.501 [2024-11-06 11:05:43.781775] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:52.501 [2024-11-06 11:05:43.781779] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:52.501 [2024-11-06 11:05:43.781809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.781815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.781819] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:23:52.501 [2024-11-06 11:05:43.781830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:52.501 [2024-11-06 11:05:43.781847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:23:52.501 [2024-11-06 11:05:43.789758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.501 [2024-11-06 11:05:43.789768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.501 [2024-11-06 11:05:43.789772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.789777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:23:52.501 [2024-11-06 11:05:43.789788] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:52.501 [2024-11-06 11:05:43.789795] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:52.501 [2024-11-06 11:05:43.789800] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:52.501 [2024-11-06 11:05:43.789813] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.789818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.789821] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:23:52.501 [2024-11-06 11:05:43.789829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.501 [2024-11-06 11:05:43.789842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:23:52.501 [2024-11-06 
11:05:43.790025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.501 [2024-11-06 11:05:43.790032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.501 [2024-11-06 11:05:43.790035] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.790039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:23:52.501 [2024-11-06 11:05:43.790044] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:52.501 [2024-11-06 11:05:43.790052] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:52.501 [2024-11-06 11:05:43.790062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.790066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.790070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:23:52.501 [2024-11-06 11:05:43.790077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.501 [2024-11-06 11:05:43.790088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:23:52.501 [2024-11-06 11:05:43.790239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.501 [2024-11-06 11:05:43.790246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.501 [2024-11-06 11:05:43.790249] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.501 [2024-11-06 11:05:43.790253] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:23:52.501 [2024-11-06 11:05:43.790258] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:23:52.501 [2024-11-06 11:05:43.790266] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:52.501 [2024-11-06 11:05:43.790273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.790277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.790281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:23:52.502 [2024-11-06 11:05:43.790287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.502 [2024-11-06 11:05:43.790298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:23:52.502 [2024-11-06 11:05:43.790499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.502 [2024-11-06 11:05:43.790506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.502 [2024-11-06 11:05:43.790510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.790516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:23:52.502 [2024-11-06 11:05:43.790522] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:52.502 [2024-11-06 11:05:43.790533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.790538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.790542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:23:52.502 [2024-11-06 11:05:43.790551] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.502 [2024-11-06 11:05:43.790562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:23:52.502 [2024-11-06 11:05:43.790774] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.502 [2024-11-06 11:05:43.790781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.502 [2024-11-06 11:05:43.790784] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.790788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:23:52.502 [2024-11-06 11:05:43.790793] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:52.502 [2024-11-06 11:05:43.790798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:52.502 [2024-11-06 11:05:43.790806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:52.502 [2024-11-06 11:05:43.790914] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:52.502 [2024-11-06 11:05:43.790921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:52.502 [2024-11-06 11:05:43.790929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.790933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.790937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:23:52.502 [2024-11-06 11:05:43.790944] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.502 [2024-11-06 11:05:43.790954] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:23:52.502 [2024-11-06 11:05:43.791116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.502 [2024-11-06 11:05:43.791122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.502 [2024-11-06 11:05:43.791126] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.791130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:23:52.502 [2024-11-06 11:05:43.791135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:52.502 [2024-11-06 11:05:43.791145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.791149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.791154] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:23:52.502 [2024-11-06 11:05:43.791162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.502 [2024-11-06 11:05:43.791173] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:23:52.502 [2024-11-06 11:05:43.791334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.502 [2024-11-06 11:05:43.791340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.502 [2024-11-06 11:05:43.791344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.791348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1128100) on tqpair=0x10c6690 00:23:52.502 [2024-11-06 11:05:43.791352] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:52.502 [2024-11-06 11:05:43.791357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:52.502 [2024-11-06 11:05:43.791366] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:52.502 [2024-11-06 11:05:43.791374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:52.502 [2024-11-06 11:05:43.791382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.791386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:23:52.502 [2024-11-06 11:05:43.791395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.502 [2024-11-06 11:05:43.791406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:23:52.502 [2024-11-06 11:05:43.791575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.502 [2024-11-06 11:05:43.791583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.502 [2024-11-06 11:05:43.791586] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.791591] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10c6690): datao=0, datal=4096, cccid=0 00:23:52.502 [2024-11-06 11:05:43.791599] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1128100) on tqpair(0x10c6690): expected_datao=0, 
payload_size=4096 00:23:52.502 [2024-11-06 11:05:43.791605] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.791638] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.791642] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.831898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.502 [2024-11-06 11:05:43.831908] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.502 [2024-11-06 11:05:43.831911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.831915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:23:52.502 [2024-11-06 11:05:43.831923] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:52.502 [2024-11-06 11:05:43.831928] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:52.502 [2024-11-06 11:05:43.831933] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:52.502 [2024-11-06 11:05:43.831944] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:52.502 [2024-11-06 11:05:43.831949] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:52.502 [2024-11-06 11:05:43.831954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:52.502 [2024-11-06 11:05:43.831965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:52.502 [2024-11-06 11:05:43.831973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.831977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.831981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:23:52.502 [2024-11-06 11:05:43.831988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:52.502 [2024-11-06 11:05:43.832000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:23:52.502 [2024-11-06 11:05:43.832135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.502 [2024-11-06 11:05:43.832142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.502 [2024-11-06 11:05:43.832145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.832149] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:23:52.502 [2024-11-06 11:05:43.832156] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.832160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.832163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10c6690) 00:23:52.502 [2024-11-06 11:05:43.832170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.502 [2024-11-06 11:05:43.832177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.832182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.832186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10c6690) 00:23:52.502 [2024-11-06 11:05:43.832192] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.502 [2024-11-06 11:05:43.832198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.832201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.832205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10c6690) 00:23:52.502 [2024-11-06 11:05:43.832218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.502 [2024-11-06 11:05:43.832224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.832228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.832231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:23:52.502 [2024-11-06 11:05:43.832237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.502 [2024-11-06 11:05:43.832242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:52.502 [2024-11-06 11:05:43.832250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:52.502 [2024-11-06 11:05:43.832256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.502 [2024-11-06 11:05:43.832260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10c6690) 00:23:52.502 [2024-11-06 11:05:43.832267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.503 
[2024-11-06 11:05:43.832279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128100, cid 0, qid 0 00:23:52.503 [2024-11-06 11:05:43.832284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128280, cid 1, qid 0 00:23:52.503 [2024-11-06 11:05:43.832289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128400, cid 2, qid 0 00:23:52.503 [2024-11-06 11:05:43.832294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:23:52.503 [2024-11-06 11:05:43.832299] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128700, cid 4, qid 0 00:23:52.503 [2024-11-06 11:05:43.832506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.503 [2024-11-06 11:05:43.832513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.503 [2024-11-06 11:05:43.832517] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.503 [2024-11-06 11:05:43.832521] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128700) on tqpair=0x10c6690 00:23:52.503 [2024-11-06 11:05:43.832528] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:52.503 [2024-11-06 11:05:43.832533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:52.503 [2024-11-06 11:05:43.832542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:52.503 [2024-11-06 11:05:43.832549] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:52.503 [2024-11-06 11:05:43.832555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.503 [2024-11-06 11:05:43.832559] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.503 [2024-11-06 11:05:43.832562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10c6690) 00:23:52.503 [2024-11-06 11:05:43.832569] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:52.503 [2024-11-06 11:05:43.832579] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128700, cid 4, qid 0 00:23:52.503 [2024-11-06 11:05:43.832776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.503 [2024-11-06 11:05:43.832782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.503 [2024-11-06 11:05:43.832786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.503 [2024-11-06 11:05:43.832790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128700) on tqpair=0x10c6690 00:23:52.503 [2024-11-06 11:05:43.832857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:52.503 [2024-11-06 11:05:43.832867] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:52.503 [2024-11-06 11:05:43.832875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.503 [2024-11-06 11:05:43.832879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10c6690) 00:23:52.503 [2024-11-06 11:05:43.832885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.503 [2024-11-06 11:05:43.832896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128700, cid 4, qid 0 00:23:52.503 [2024-11-06 11:05:43.833060] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.503 [2024-11-06 11:05:43.833066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.503 [2024-11-06 11:05:43.833070] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.503 [2024-11-06 11:05:43.833074] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10c6690): datao=0, datal=4096, cccid=4 00:23:52.503 [2024-11-06 11:05:43.833078] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1128700) on tqpair(0x10c6690): expected_datao=0, payload_size=4096 00:23:52.503 [2024-11-06 11:05:43.833083] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.503 [2024-11-06 11:05:43.833094] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.503 [2024-11-06 11:05:43.833098] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.503 [2024-11-06 11:05:43.877753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.503 [2024-11-06 11:05:43.877762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.503 [2024-11-06 11:05:43.877766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.503 [2024-11-06 11:05:43.877770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128700) on tqpair=0x10c6690 00:23:52.503 [2024-11-06 11:05:43.877780] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:52.503 [2024-11-06 11:05:43.877790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:52.503 [2024-11-06 11:05:43.877799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:52.503 [2024-11-06 11:05:43.877806] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:52.503 [2024-11-06 11:05:43.877810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10c6690) 00:23:52.503 [2024-11-06 11:05:43.877817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.503 [2024-11-06 11:05:43.877829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128700, cid 4, qid 0 00:23:52.503 [2024-11-06 11:05:43.877994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.503 [2024-11-06 11:05:43.878001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.503 [2024-11-06 11:05:43.878004] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.503 [2024-11-06 11:05:43.878008] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10c6690): datao=0, datal=4096, cccid=4 00:23:52.503 [2024-11-06 11:05:43.878013] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1128700) on tqpair(0x10c6690): expected_datao=0, payload_size=4096 00:23:52.503 [2024-11-06 11:05:43.878017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.503 [2024-11-06 11:05:43.878031] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.503 [2024-11-06 11:05:43.878035] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.918901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.767 [2024-11-06 11:05:43.918914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.767 [2024-11-06 11:05:43.918917] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.918922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128700) on tqpair=0x10c6690 00:23:52.767 [2024-11-06 11:05:43.918936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:52.767 [2024-11-06 11:05:43.918946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:52.767 [2024-11-06 11:05:43.918953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.918957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10c6690) 00:23:52.767 [2024-11-06 11:05:43.918964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.767 [2024-11-06 11:05:43.918975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128700, cid 4, qid 0 00:23:52.767 [2024-11-06 11:05:43.919208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.767 [2024-11-06 11:05:43.919215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.767 [2024-11-06 11:05:43.919218] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.919222] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10c6690): datao=0, datal=4096, cccid=4 00:23:52.767 [2024-11-06 11:05:43.919226] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1128700) on tqpair(0x10c6690): expected_datao=0, payload_size=4096 00:23:52.767 [2024-11-06 11:05:43.919231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.919245] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.919249] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.959917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.767 [2024-11-06 11:05:43.959926] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.767 [2024-11-06 11:05:43.959930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.959934] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128700) on tqpair=0x10c6690 00:23:52.767 [2024-11-06 11:05:43.959942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:52.767 [2024-11-06 11:05:43.959950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:52.767 [2024-11-06 11:05:43.959959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:52.767 [2024-11-06 11:05:43.959965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:52.767 [2024-11-06 11:05:43.959971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:52.767 [2024-11-06 11:05:43.959976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:52.767 [2024-11-06 11:05:43.959982] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:52.767 [2024-11-06 11:05:43.959987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:52.767 [2024-11-06 11:05:43.959992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:52.767 [2024-11-06 11:05:43.960006] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.960012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10c6690) 00:23:52.767 [2024-11-06 11:05:43.960019] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.767 [2024-11-06 11:05:43.960026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.960030] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.960034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10c6690) 00:23:52.767 [2024-11-06 11:05:43.960040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.767 [2024-11-06 11:05:43.960055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128700, cid 4, qid 0 00:23:52.767 [2024-11-06 11:05:43.960060] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128880, cid 5, qid 0 00:23:52.767 [2024-11-06 11:05:43.960246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.767 [2024-11-06 11:05:43.960252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.767 [2024-11-06 11:05:43.960256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.960260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128700) on tqpair=0x10c6690 00:23:52.767 [2024-11-06 11:05:43.960266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.767 [2024-11-06 11:05:43.960272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.767 [2024-11-06 11:05:43.960276] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.960279] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128880) on tqpair=0x10c6690 00:23:52.767 [2024-11-06 11:05:43.960289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.960293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10c6690) 00:23:52.767 [2024-11-06 11:05:43.960299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.767 [2024-11-06 11:05:43.960309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128880, cid 5, qid 0 00:23:52.767 [2024-11-06 11:05:43.960482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.767 [2024-11-06 11:05:43.960488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.767 [2024-11-06 11:05:43.960492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.960495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128880) on tqpair=0x10c6690 00:23:52.767 [2024-11-06 11:05:43.960504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.960508] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10c6690) 00:23:52.767 [2024-11-06 11:05:43.960515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.767 [2024-11-06 11:05:43.960525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128880, cid 5, qid 0 00:23:52.767 [2024-11-06 11:05:43.960708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.767 [2024-11-06 11:05:43.960714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.767 [2024-11-06 11:05:43.960717] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.960721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128880) on tqpair=0x10c6690 00:23:52.767 [2024-11-06 11:05:43.960730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.767 [2024-11-06 11:05:43.960734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10c6690) 00:23:52.767 [2024-11-06 11:05:43.960741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.767 [2024-11-06 11:05:43.960757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128880, cid 5, qid 0 00:23:52.768 [2024-11-06 11:05:43.960977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.768 [2024-11-06 11:05:43.960984] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.768 [2024-11-06 11:05:43.960987] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.960991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128880) on tqpair=0x10c6690 00:23:52.768 [2024-11-06 11:05:43.961005] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10c6690) 00:23:52.768 [2024-11-06 11:05:43.961016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.768 [2024-11-06 11:05:43.961023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10c6690) 00:23:52.768 [2024-11-06 11:05:43.961033] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.768 [2024-11-06 11:05:43.961042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961045] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x10c6690) 00:23:52.768 [2024-11-06 11:05:43.961052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.768 [2024-11-06 11:05:43.961059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10c6690) 00:23:52.768 [2024-11-06 11:05:43.961069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.768 [2024-11-06 11:05:43.961080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128880, cid 5, qid 0 00:23:52.768 [2024-11-06 11:05:43.961086] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128700, cid 4, qid 0 00:23:52.768 [2024-11-06 11:05:43.961091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128a00, cid 6, qid 0 00:23:52.768 [2024-11-06 11:05:43.961095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128b80, cid 7, qid 0 00:23:52.768 [2024-11-06 11:05:43.961352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.768 [2024-11-06 11:05:43.961358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.768 [2024-11-06 11:05:43.961362] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961365] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10c6690): datao=0, datal=8192, cccid=5 00:23:52.768 [2024-11-06 11:05:43.961370] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1128880) on tqpair(0x10c6690): expected_datao=0, payload_size=8192 00:23:52.768 [2024-11-06 11:05:43.961374] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961441] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961445] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.768 [2024-11-06 11:05:43.961457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.768 [2024-11-06 11:05:43.961460] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961464] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10c6690): datao=0, datal=512, cccid=4 00:23:52.768 [2024-11-06 11:05:43.961469] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1128700) on tqpair(0x10c6690): expected_datao=0, payload_size=512 00:23:52.768 [2024-11-06 11:05:43.961475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961482] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961485] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.768 [2024-11-06 11:05:43.961497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.768 [2024-11-06 11:05:43.961500] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961504] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0x10c6690): datao=0, datal=512, cccid=6 00:23:52.768 [2024-11-06 11:05:43.961508] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1128a00) on tqpair(0x10c6690): expected_datao=0, payload_size=512 00:23:52.768 [2024-11-06 11:05:43.961512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961519] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961522] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:52.768 [2024-11-06 11:05:43.961534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:52.768 [2024-11-06 11:05:43.961537] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961541] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10c6690): datao=0, datal=4096, cccid=7 00:23:52.768 [2024-11-06 11:05:43.961545] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1128b80) on tqpair(0x10c6690): expected_datao=0, payload_size=4096 00:23:52.768 [2024-11-06 11:05:43.961549] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961556] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961559] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.768 [2024-11-06 11:05:43.961576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.768 [2024-11-06 11:05:43.961580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128880) on tqpair=0x10c6690 00:23:52.768 [2024-11-06 
11:05:43.961598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.768 [2024-11-06 11:05:43.961604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.768 [2024-11-06 11:05:43.961607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128700) on tqpair=0x10c6690 00:23:52.768 [2024-11-06 11:05:43.961622] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.768 [2024-11-06 11:05:43.961627] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.768 [2024-11-06 11:05:43.961631] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961635] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128a00) on tqpair=0x10c6690 00:23:52.768 [2024-11-06 11:05:43.961642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.768 [2024-11-06 11:05:43.961648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.768 [2024-11-06 11:05:43.961651] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.768 [2024-11-06 11:05:43.961655] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128b80) on tqpair=0x10c6690 00:23:52.768 ===================================================== 00:23:52.768 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:52.768 ===================================================== 00:23:52.768 Controller Capabilities/Features 00:23:52.768 ================================ 00:23:52.768 Vendor ID: 8086 00:23:52.768 Subsystem Vendor ID: 8086 00:23:52.768 Serial Number: SPDK00000000000001 00:23:52.768 Model Number: SPDK bdev Controller 00:23:52.768 Firmware Version: 25.01 00:23:52.768 Recommended Arb Burst: 6 00:23:52.768 IEEE OUI Identifier: e4 d2 5c 00:23:52.768 Multi-path I/O 00:23:52.768 May have 
multiple subsystem ports: Yes 00:23:52.768 May have multiple controllers: Yes 00:23:52.768 Associated with SR-IOV VF: No 00:23:52.768 Max Data Transfer Size: 131072 00:23:52.768 Max Number of Namespaces: 32 00:23:52.768 Max Number of I/O Queues: 127 00:23:52.768 NVMe Specification Version (VS): 1.3 00:23:52.768 NVMe Specification Version (Identify): 1.3 00:23:52.768 Maximum Queue Entries: 128 00:23:52.768 Contiguous Queues Required: Yes 00:23:52.768 Arbitration Mechanisms Supported 00:23:52.768 Weighted Round Robin: Not Supported 00:23:52.768 Vendor Specific: Not Supported 00:23:52.768 Reset Timeout: 15000 ms 00:23:52.768 Doorbell Stride: 4 bytes 00:23:52.768 NVM Subsystem Reset: Not Supported 00:23:52.768 Command Sets Supported 00:23:52.768 NVM Command Set: Supported 00:23:52.768 Boot Partition: Not Supported 00:23:52.768 Memory Page Size Minimum: 4096 bytes 00:23:52.768 Memory Page Size Maximum: 4096 bytes 00:23:52.768 Persistent Memory Region: Not Supported 00:23:52.768 Optional Asynchronous Events Supported 00:23:52.768 Namespace Attribute Notices: Supported 00:23:52.768 Firmware Activation Notices: Not Supported 00:23:52.768 ANA Change Notices: Not Supported 00:23:52.768 PLE Aggregate Log Change Notices: Not Supported 00:23:52.768 LBA Status Info Alert Notices: Not Supported 00:23:52.768 EGE Aggregate Log Change Notices: Not Supported 00:23:52.768 Normal NVM Subsystem Shutdown event: Not Supported 00:23:52.768 Zone Descriptor Change Notices: Not Supported 00:23:52.768 Discovery Log Change Notices: Not Supported 00:23:52.768 Controller Attributes 00:23:52.768 128-bit Host Identifier: Supported 00:23:52.768 Non-Operational Permissive Mode: Not Supported 00:23:52.768 NVM Sets: Not Supported 00:23:52.768 Read Recovery Levels: Not Supported 00:23:52.768 Endurance Groups: Not Supported 00:23:52.768 Predictable Latency Mode: Not Supported 00:23:52.768 Traffic Based Keep ALive: Not Supported 00:23:52.768 Namespace Granularity: Not Supported 00:23:52.768 SQ 
Associations: Not Supported 00:23:52.768 UUID List: Not Supported 00:23:52.768 Multi-Domain Subsystem: Not Supported 00:23:52.768 Fixed Capacity Management: Not Supported 00:23:52.768 Variable Capacity Management: Not Supported 00:23:52.768 Delete Endurance Group: Not Supported 00:23:52.768 Delete NVM Set: Not Supported 00:23:52.768 Extended LBA Formats Supported: Not Supported 00:23:52.768 Flexible Data Placement Supported: Not Supported 00:23:52.768 00:23:52.768 Controller Memory Buffer Support 00:23:52.768 ================================ 00:23:52.768 Supported: No 00:23:52.768 00:23:52.768 Persistent Memory Region Support 00:23:52.768 ================================ 00:23:52.768 Supported: No 00:23:52.769 00:23:52.769 Admin Command Set Attributes 00:23:52.769 ============================ 00:23:52.769 Security Send/Receive: Not Supported 00:23:52.769 Format NVM: Not Supported 00:23:52.769 Firmware Activate/Download: Not Supported 00:23:52.769 Namespace Management: Not Supported 00:23:52.769 Device Self-Test: Not Supported 00:23:52.769 Directives: Not Supported 00:23:52.769 NVMe-MI: Not Supported 00:23:52.769 Virtualization Management: Not Supported 00:23:52.769 Doorbell Buffer Config: Not Supported 00:23:52.769 Get LBA Status Capability: Not Supported 00:23:52.769 Command & Feature Lockdown Capability: Not Supported 00:23:52.769 Abort Command Limit: 4 00:23:52.769 Async Event Request Limit: 4 00:23:52.769 Number of Firmware Slots: N/A 00:23:52.769 Firmware Slot 1 Read-Only: N/A 00:23:52.769 Firmware Activation Without Reset: N/A 00:23:52.769 Multiple Update Detection Support: N/A 00:23:52.769 Firmware Update Granularity: No Information Provided 00:23:52.769 Per-Namespace SMART Log: No 00:23:52.769 Asymmetric Namespace Access Log Page: Not Supported 00:23:52.769 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:52.769 Command Effects Log Page: Supported 00:23:52.769 Get Log Page Extended Data: Supported 00:23:52.769 Telemetry Log Pages: Not Supported 00:23:52.769 
Persistent Event Log Pages: Not Supported 00:23:52.769 Supported Log Pages Log Page: May Support 00:23:52.769 Commands Supported & Effects Log Page: Not Supported 00:23:52.769 Feature Identifiers & Effects Log Page:May Support 00:23:52.769 NVMe-MI Commands & Effects Log Page: May Support 00:23:52.769 Data Area 4 for Telemetry Log: Not Supported 00:23:52.769 Error Log Page Entries Supported: 128 00:23:52.769 Keep Alive: Supported 00:23:52.769 Keep Alive Granularity: 10000 ms 00:23:52.769 00:23:52.769 NVM Command Set Attributes 00:23:52.769 ========================== 00:23:52.769 Submission Queue Entry Size 00:23:52.769 Max: 64 00:23:52.769 Min: 64 00:23:52.769 Completion Queue Entry Size 00:23:52.769 Max: 16 00:23:52.769 Min: 16 00:23:52.769 Number of Namespaces: 32 00:23:52.769 Compare Command: Supported 00:23:52.769 Write Uncorrectable Command: Not Supported 00:23:52.769 Dataset Management Command: Supported 00:23:52.769 Write Zeroes Command: Supported 00:23:52.769 Set Features Save Field: Not Supported 00:23:52.769 Reservations: Supported 00:23:52.769 Timestamp: Not Supported 00:23:52.769 Copy: Supported 00:23:52.769 Volatile Write Cache: Present 00:23:52.769 Atomic Write Unit (Normal): 1 00:23:52.769 Atomic Write Unit (PFail): 1 00:23:52.769 Atomic Compare & Write Unit: 1 00:23:52.769 Fused Compare & Write: Supported 00:23:52.769 Scatter-Gather List 00:23:52.769 SGL Command Set: Supported 00:23:52.769 SGL Keyed: Supported 00:23:52.769 SGL Bit Bucket Descriptor: Not Supported 00:23:52.769 SGL Metadata Pointer: Not Supported 00:23:52.769 Oversized SGL: Not Supported 00:23:52.769 SGL Metadata Address: Not Supported 00:23:52.769 SGL Offset: Supported 00:23:52.769 Transport SGL Data Block: Not Supported 00:23:52.769 Replay Protected Memory Block: Not Supported 00:23:52.769 00:23:52.769 Firmware Slot Information 00:23:52.769 ========================= 00:23:52.769 Active slot: 1 00:23:52.769 Slot 1 Firmware Revision: 25.01 00:23:52.769 00:23:52.769 00:23:52.769 
Commands Supported and Effects 00:23:52.769 ============================== 00:23:52.769 Admin Commands 00:23:52.769 -------------- 00:23:52.769 Get Log Page (02h): Supported 00:23:52.769 Identify (06h): Supported 00:23:52.769 Abort (08h): Supported 00:23:52.769 Set Features (09h): Supported 00:23:52.769 Get Features (0Ah): Supported 00:23:52.769 Asynchronous Event Request (0Ch): Supported 00:23:52.769 Keep Alive (18h): Supported 00:23:52.769 I/O Commands 00:23:52.769 ------------ 00:23:52.769 Flush (00h): Supported LBA-Change 00:23:52.769 Write (01h): Supported LBA-Change 00:23:52.769 Read (02h): Supported 00:23:52.769 Compare (05h): Supported 00:23:52.769 Write Zeroes (08h): Supported LBA-Change 00:23:52.769 Dataset Management (09h): Supported LBA-Change 00:23:52.769 Copy (19h): Supported LBA-Change 00:23:52.769 00:23:52.769 Error Log 00:23:52.769 ========= 00:23:52.769 00:23:52.769 Arbitration 00:23:52.769 =========== 00:23:52.769 Arbitration Burst: 1 00:23:52.769 00:23:52.769 Power Management 00:23:52.769 ================ 00:23:52.769 Number of Power States: 1 00:23:52.769 Current Power State: Power State #0 00:23:52.769 Power State #0: 00:23:52.769 Max Power: 0.00 W 00:23:52.769 Non-Operational State: Operational 00:23:52.769 Entry Latency: Not Reported 00:23:52.769 Exit Latency: Not Reported 00:23:52.769 Relative Read Throughput: 0 00:23:52.769 Relative Read Latency: 0 00:23:52.769 Relative Write Throughput: 0 00:23:52.769 Relative Write Latency: 0 00:23:52.769 Idle Power: Not Reported 00:23:52.769 Active Power: Not Reported 00:23:52.769 Non-Operational Permissive Mode: Not Supported 00:23:52.769 00:23:52.769 Health Information 00:23:52.769 ================== 00:23:52.769 Critical Warnings: 00:23:52.769 Available Spare Space: OK 00:23:52.769 Temperature: OK 00:23:52.769 Device Reliability: OK 00:23:52.769 Read Only: No 00:23:52.769 Volatile Memory Backup: OK 00:23:52.769 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:52.769 Temperature Threshold: 0 Kelvin 
(-273 Celsius) 00:23:52.769 Available Spare: 0% 00:23:52.769 Available Spare Threshold: 0% 00:23:52.769 Life Percentage Used:[2024-11-06 11:05:43.965759] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.769 [2024-11-06 11:05:43.965767] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10c6690) 00:23:52.769 [2024-11-06 11:05:43.965774] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.769 [2024-11-06 11:05:43.965788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128b80, cid 7, qid 0 00:23:52.769 [2024-11-06 11:05:43.965942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.769 [2024-11-06 11:05:43.965949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.769 [2024-11-06 11:05:43.965953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.769 [2024-11-06 11:05:43.965957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128b80) on tqpair=0x10c6690 00:23:52.769 [2024-11-06 11:05:43.965988] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:52.769 [2024-11-06 11:05:43.965997] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128100) on tqpair=0x10c6690 00:23:52.769 [2024-11-06 11:05:43.966003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.769 [2024-11-06 11:05:43.966009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128280) on tqpair=0x10c6690 00:23:52.769 [2024-11-06 11:05:43.966014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.769 [2024-11-06 11:05:43.966019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1128400) on tqpair=0x10c6690 00:23:52.769 [2024-11-06 11:05:43.966023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.769 [2024-11-06 11:05:43.966028] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:23:52.769 [2024-11-06 11:05:43.966033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.769 [2024-11-06 11:05:43.966041] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.769 [2024-11-06 11:05:43.966045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.769 [2024-11-06 11:05:43.966048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:23:52.769 [2024-11-06 11:05:43.966056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.769 [2024-11-06 11:05:43.966067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:23:52.769 [2024-11-06 11:05:43.966261] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.769 [2024-11-06 11:05:43.966268] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.769 [2024-11-06 11:05:43.966271] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.769 [2024-11-06 11:05:43.966275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:23:52.769 [2024-11-06 11:05:43.966282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.769 [2024-11-06 11:05:43.966286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.769 [2024-11-06 11:05:43.966289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 
00:23:52.769 [2024-11-06 11:05:43.966296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.769 [2024-11-06 11:05:43.966309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:23:52.769 [2024-11-06 11:05:43.966512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.769 [2024-11-06 11:05:43.966518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.769 [2024-11-06 11:05:43.966522] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.769 [2024-11-06 11:05:43.966526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:23:52.769 [2024-11-06 11:05:43.966530] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:52.769 [2024-11-06 11:05:43.966535] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:52.769 [2024-11-06 11:05:43.966544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.966551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.966555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:23:52.770 [2024-11-06 11:05:43.966562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.770 [2024-11-06 11:05:43.966572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:23:52.770 [2024-11-06 11:05:43.966731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.770 [2024-11-06 11:05:43.966737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.770 [2024-11-06 11:05:43.966741] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.966745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:23:52.770 [2024-11-06 11:05:43.966759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.966764] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.966767] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:23:52.770 [2024-11-06 11:05:43.966774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.770 [2024-11-06 11:05:43.966784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:23:52.770 [2024-11-06 11:05:43.966999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.770 [2024-11-06 11:05:43.967005] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.770 [2024-11-06 11:05:43.967009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.967012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:23:52.770 [2024-11-06 11:05:43.967022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.967026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.967030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:23:52.770 [2024-11-06 11:05:43.967036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.770 [2024-11-06 11:05:43.967046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:23:52.770 [2024-11-06 
11:05:43.967236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.770 [2024-11-06 11:05:43.967243] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.770 [2024-11-06 11:05:43.967246] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.967250] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:23:52.770 [2024-11-06 11:05:43.967260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.967264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.967267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:23:52.770 [2024-11-06 11:05:43.967274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.770 [2024-11-06 11:05:43.967284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:23:52.770 [2024-11-06 11:05:43.967471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.770 [2024-11-06 11:05:43.967478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.770 [2024-11-06 11:05:43.967481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.967485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:23:52.770 [2024-11-06 11:05:43.967495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.967499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.967506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:23:52.770 [2024-11-06 11:05:43.967513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.770 [2024-11-06 11:05:43.967523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:23:52.770 [2024-11-06 11:05:43.967741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.770 [2024-11-06 11:05:43.967751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.770 [2024-11-06 11:05:43.967754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.967758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:23:52.770 [2024-11-06 11:05:43.967768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.967772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.967776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:23:52.770 [2024-11-06 11:05:43.967783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.770 [2024-11-06 11:05:43.967793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:23:52.770 [2024-11-06 11:05:43.968009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.770 [2024-11-06 11:05:43.968016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.770 [2024-11-06 11:05:43.968019] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.968023] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:23:52.770 [2024-11-06 11:05:43.968033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.968037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:52.770 [2024-11-06 11:05:43.968041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:23:52.770 [2024-11-06 11:05:43.968047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.770 [2024-11-06 11:05:43.968057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:23:52.770 [2024-11-06 11:05:43.968205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.770 [2024-11-06 11:05:43.968211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.770 [2024-11-06 11:05:43.968215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.968218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:23:52.770 [2024-11-06 11:05:43.968228] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.968232] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.968236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:23:52.770 [2024-11-06 11:05:43.968242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.770 [2024-11-06 11:05:43.968252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:23:52.770 [2024-11-06 11:05:43.968421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.770 [2024-11-06 11:05:43.968427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.770 [2024-11-06 11:05:43.968431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.968435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) 
on tqpair=0x10c6690 00:23:52.770 [2024-11-06 11:05:43.968444] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.968448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.968452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:23:52.770 [2024-11-06 11:05:43.968461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.770 [2024-11-06 11:05:43.968472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:23:52.770 [2024-11-06 11:05:43.968613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.770 [2024-11-06 11:05:43.968619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.770 [2024-11-06 11:05:43.968623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.968627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:23:52.770 [2024-11-06 11:05:43.968636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.968640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.968644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:23:52.770 [2024-11-06 11:05:43.968651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.770 [2024-11-06 11:05:43.968661] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:23:52.770 [2024-11-06 11:05:43.968830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.770 [2024-11-06 11:05:43.968837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:23:52.770 [2024-11-06 11:05:43.968840] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.968844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:23:52.770 [2024-11-06 11:05:43.968854] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.968858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.968861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:23:52.770 [2024-11-06 11:05:43.968868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.770 [2024-11-06 11:05:43.968878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:23:52.770 [2024-11-06 11:05:43.969050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.770 [2024-11-06 11:05:43.969056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.770 [2024-11-06 11:05:43.969060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.969063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:23:52.770 [2024-11-06 11:05:43.969073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.969077] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.969081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:23:52.770 [2024-11-06 11:05:43.969087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.770 [2024-11-06 11:05:43.969098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1128580, cid 3, qid 0 00:23:52.770 [2024-11-06 11:05:43.969264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.770 [2024-11-06 11:05:43.969270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.770 [2024-11-06 11:05:43.969274] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.770 [2024-11-06 11:05:43.969278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:23:52.770 [2024-11-06 11:05:43.969287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.771 [2024-11-06 11:05:43.969292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.771 [2024-11-06 11:05:43.969295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:23:52.771 [2024-11-06 11:05:43.969302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.771 [2024-11-06 11:05:43.969314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:23:52.771 [2024-11-06 11:05:43.969535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.771 [2024-11-06 11:05:43.969542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.771 [2024-11-06 11:05:43.969545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.771 [2024-11-06 11:05:43.969549] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:23:52.771 [2024-11-06 11:05:43.969559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.771 [2024-11-06 11:05:43.969563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.771 [2024-11-06 11:05:43.969567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:23:52.771 [2024-11-06 11:05:43.969573] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.771 [2024-11-06 11:05:43.969583] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:23:52.771 [2024-11-06 11:05:43.973753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.771 [2024-11-06 11:05:43.973762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.771 [2024-11-06 11:05:43.973766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.771 [2024-11-06 11:05:43.973769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:23:52.771 [2024-11-06 11:05:43.973779] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.771 [2024-11-06 11:05:43.973783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.771 [2024-11-06 11:05:43.973787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10c6690) 00:23:52.771 [2024-11-06 11:05:43.973794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.771 [2024-11-06 11:05:43.973805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1128580, cid 3, qid 0 00:23:52.771 [2024-11-06 11:05:43.974005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:52.771 [2024-11-06 11:05:43.974012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:52.771 [2024-11-06 11:05:43.974015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:52.771 [2024-11-06 11:05:43.974019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1128580) on tqpair=0x10c6690 00:23:52.771 [2024-11-06 11:05:43.974026] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 
milliseconds 00:23:52.771 0% 00:23:52.771 Data Units Read: 0 00:23:52.771 Data Units Written: 0 00:23:52.771 Host Read Commands: 0 00:23:52.771 Host Write Commands: 0 00:23:52.771 Controller Busy Time: 0 minutes 00:23:52.771 Power Cycles: 0 00:23:52.771 Power On Hours: 0 hours 00:23:52.771 Unsafe Shutdowns: 0 00:23:52.771 Unrecoverable Media Errors: 0 00:23:52.771 Lifetime Error Log Entries: 0 00:23:52.771 Warning Temperature Time: 0 minutes 00:23:52.771 Critical Temperature Time: 0 minutes 00:23:52.771 00:23:52.771 Number of Queues 00:23:52.771 ================ 00:23:52.771 Number of I/O Submission Queues: 127 00:23:52.771 Number of I/O Completion Queues: 127 00:23:52.771 00:23:52.771 Active Namespaces 00:23:52.771 ================= 00:23:52.771 Namespace ID:1 00:23:52.771 Error Recovery Timeout: Unlimited 00:23:52.771 Command Set Identifier: NVM (00h) 00:23:52.771 Deallocate: Supported 00:23:52.771 Deallocated/Unwritten Error: Not Supported 00:23:52.771 Deallocated Read Value: Unknown 00:23:52.771 Deallocate in Write Zeroes: Not Supported 00:23:52.771 Deallocated Guard Field: 0xFFFF 00:23:52.771 Flush: Supported 00:23:52.771 Reservation: Supported 00:23:52.771 Namespace Sharing Capabilities: Multiple Controllers 00:23:52.771 Size (in LBAs): 131072 (0GiB) 00:23:52.771 Capacity (in LBAs): 131072 (0GiB) 00:23:52.771 Utilization (in LBAs): 131072 (0GiB) 00:23:52.771 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:52.771 EUI64: ABCDEF0123456789 00:23:52.771 UUID: 4ab341cf-c6eb-4690-b2e9-cf587d3dfa96 00:23:52.771 Thin Provisioning: Not Supported 00:23:52.771 Per-NS Atomic Units: Yes 00:23:52.771 Atomic Boundary Size (Normal): 0 00:23:52.771 Atomic Boundary Size (PFail): 0 00:23:52.771 Atomic Boundary Offset: 0 00:23:52.771 Maximum Single Source Range Length: 65535 00:23:52.771 Maximum Copy Length: 65535 00:23:52.771 Maximum Source Range Count: 1 00:23:52.771 NGUID/EUI64 Never Reused: No 00:23:52.771 Namespace Write Protected: No 00:23:52.771 Number of LBA Formats: 1 
00:23:52.771 Current LBA Format: LBA Format #00 00:23:52.771 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:52.771 00:23:52.771 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:52.771 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:52.771 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.771 11:05:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:52.771 rmmod nvme_tcp 00:23:52.771 rmmod nvme_fabrics 00:23:52.771 rmmod nvme_keyring 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3349302 ']' 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@518 -- # killprocess 3349302 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 3349302 ']' 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 3349302 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3349302 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3349302' 00:23:52.771 killing process with pid 3349302 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 3349302 00:23:52.771 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 3349302 00:23:53.032 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:53.032 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:53.032 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:53.032 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:53.032 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:53.032 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:53.032 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:53.032 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:53.032 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:53.032 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.032 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.032 11:05:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.945 11:05:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.945 00:23:54.945 real 0m11.641s 00:23:54.945 user 0m9.125s 00:23:54.945 sys 0m5.982s 00:23:54.945 11:05:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:54.945 11:05:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.945 ************************************ 00:23:54.945 END TEST nvmf_identify 00:23:54.945 ************************************ 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.206 ************************************ 00:23:55.206 START TEST nvmf_perf 00:23:55.206 ************************************ 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:55.206 * Looking for test storage... 
00:23:55.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:55.206 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:55.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.468 --rc genhtml_branch_coverage=1 00:23:55.468 --rc genhtml_function_coverage=1 00:23:55.468 --rc genhtml_legend=1 00:23:55.468 --rc geninfo_all_blocks=1 00:23:55.468 --rc geninfo_unexecuted_blocks=1 00:23:55.468 00:23:55.468 ' 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:55.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:23:55.468 --rc genhtml_branch_coverage=1 00:23:55.468 --rc genhtml_function_coverage=1 00:23:55.468 --rc genhtml_legend=1 00:23:55.468 --rc geninfo_all_blocks=1 00:23:55.468 --rc geninfo_unexecuted_blocks=1 00:23:55.468 00:23:55.468 ' 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:55.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.468 --rc genhtml_branch_coverage=1 00:23:55.468 --rc genhtml_function_coverage=1 00:23:55.468 --rc genhtml_legend=1 00:23:55.468 --rc geninfo_all_blocks=1 00:23:55.468 --rc geninfo_unexecuted_blocks=1 00:23:55.468 00:23:55.468 ' 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:55.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.468 --rc genhtml_branch_coverage=1 00:23:55.468 --rc genhtml_function_coverage=1 00:23:55.468 --rc genhtml_legend=1 00:23:55.468 --rc geninfo_all_blocks=1 00:23:55.468 --rc geninfo_unexecuted_blocks=1 00:23:55.468 00:23:55.468 ' 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:55.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:55.468 11:05:46 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:55.468 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:55.469 11:05:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.611 11:05:53 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.611 
11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:03.611 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:03.611 Found 0000:4b:00.1 (0x8086 - 
0x159b) 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.611 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:03.612 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.612 11:05:53 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:03.612 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:03.612 11:05:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:24:03.612 00:24:03.612 --- 10.0.0.2 ping statistics --- 00:24:03.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.612 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:03.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:24:03.612 00:24:03.612 --- 10.0.0.1 ping statistics --- 00:24:03.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.612 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3353834 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3353834 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 3353834 ']' 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:03.612 [2024-11-06 11:05:54.163900] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:24:03.612 [2024-11-06 11:05:54.163949] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.612 [2024-11-06 11:05:54.241759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:03.612 [2024-11-06 11:05:54.277879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.612 [2024-11-06 11:05:54.277913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.612 [2024-11-06 11:05:54.277921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.612 [2024-11-06 11:05:54.277928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.612 [2024-11-06 11:05:54.277934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:03.612 [2024-11-06 11:05:54.279445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.612 [2024-11-06 11:05:54.279563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.612 [2024-11-06 11:05:54.279722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.612 [2024-11-06 11:05:54.279723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:03.612 11:05:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:03.612 11:05:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.612 11:05:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:03.612 11:05:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:04.183 11:05:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:04.183 11:05:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:04.443 11:05:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:04.443 11:05:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:04.705 11:05:55 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:04.705 11:05:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:04.705 11:05:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:04.705 11:05:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:04.705 11:05:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:04.705 [2024-11-06 11:05:56.044768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.705 11:05:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:04.966 11:05:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:04.966 11:05:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:05.226 11:05:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:05.226 11:05:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:05.226 11:05:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.487 [2024-11-06 11:05:56.771413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.487 11:05:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:24:05.747 11:05:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:05.747 11:05:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:05.747 11:05:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:05.747 11:05:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:07.131 Initializing NVMe Controllers 00:24:07.131 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:07.131 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:07.131 Initialization complete. Launching workers. 00:24:07.131 ======================================================== 00:24:07.131 Latency(us) 00:24:07.131 Device Information : IOPS MiB/s Average min max 00:24:07.131 PCIE (0000:65:00.0) NSID 1 from core 0: 79447.27 310.34 402.76 13.26 5505.43 00:24:07.131 ======================================================== 00:24:07.131 Total : 79447.27 310.34 402.76 13.26 5505.43 00:24:07.131 00:24:07.131 11:05:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:08.517 Initializing NVMe Controllers 00:24:08.517 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:08.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:08.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:08.517 Initialization complete. Launching workers. 
00:24:08.517 ======================================================== 00:24:08.517 Latency(us) 00:24:08.517 Device Information : IOPS MiB/s Average min max 00:24:08.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 110.00 0.43 9132.67 256.15 45630.32 00:24:08.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 17106.63 7956.04 47900.73 00:24:08.517 ======================================================== 00:24:08.517 Total : 171.00 0.67 11977.18 256.15 47900.73 00:24:08.517 00:24:08.517 11:05:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:09.459 Initializing NVMe Controllers 00:24:09.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:09.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:09.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:09.459 Initialization complete. Launching workers. 
00:24:09.459 ======================================================== 00:24:09.459 Latency(us) 00:24:09.459 Device Information : IOPS MiB/s Average min max 00:24:09.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10279.04 40.15 3121.46 502.06 46186.44 00:24:09.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3819.15 14.92 8391.30 5369.17 15986.90 00:24:09.459 ======================================================== 00:24:09.459 Total : 14098.19 55.07 4549.04 502.06 46186.44 00:24:09.459 00:24:09.459 11:06:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:09.459 11:06:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:09.459 11:06:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:12.001 Initializing NVMe Controllers 00:24:12.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:12.001 Controller IO queue size 128, less than required. 00:24:12.001 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:12.001 Controller IO queue size 128, less than required. 00:24:12.001 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:12.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:12.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:12.001 Initialization complete. Launching workers. 
00:24:12.001 ======================================================== 00:24:12.001 Latency(us) 00:24:12.001 Device Information : IOPS MiB/s Average min max 00:24:12.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1596.38 399.09 81163.57 48029.32 130298.38 00:24:12.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 587.09 146.77 224357.59 64988.91 341416.35 00:24:12.001 ======================================================== 00:24:12.001 Total : 2183.47 545.87 119665.40 48029.32 341416.35 00:24:12.001 00:24:12.001 11:06:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:12.261 No valid NVMe controllers or AIO or URING devices found 00:24:12.261 Initializing NVMe Controllers 00:24:12.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:12.261 Controller IO queue size 128, less than required. 00:24:12.261 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:12.261 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:12.261 Controller IO queue size 128, less than required. 00:24:12.261 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:12.261 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:12.261 WARNING: Some requested NVMe devices were skipped 00:24:12.261 11:06:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:14.907 Initializing NVMe Controllers 00:24:14.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:14.907 Controller IO queue size 128, less than required. 00:24:14.907 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:14.907 Controller IO queue size 128, less than required. 00:24:14.907 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:14.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:14.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:14.907 Initialization complete. Launching workers. 
00:24:14.907 00:24:14.907 ==================== 00:24:14.907 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:14.907 TCP transport: 00:24:14.907 polls: 17199 00:24:14.907 idle_polls: 9212 00:24:14.907 sock_completions: 7987 00:24:14.907 nvme_completions: 6113 00:24:14.907 submitted_requests: 9140 00:24:14.907 queued_requests: 1 00:24:14.907 00:24:14.907 ==================== 00:24:14.907 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:14.907 TCP transport: 00:24:14.907 polls: 17015 00:24:14.907 idle_polls: 8145 00:24:14.907 sock_completions: 8870 00:24:14.907 nvme_completions: 8423 00:24:14.907 submitted_requests: 12506 00:24:14.907 queued_requests: 1 00:24:14.907 ======================================================== 00:24:14.907 Latency(us) 00:24:14.907 Device Information : IOPS MiB/s Average min max 00:24:14.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1527.96 381.99 85132.85 55098.70 147633.75 00:24:14.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2105.45 526.36 61342.87 30689.71 111760.35 00:24:14.907 ======================================================== 00:24:14.907 Total : 3633.41 908.35 71347.30 30689.71 147633.75 00:24:14.907 00:24:14.907 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:14.907 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:15.168 rmmod nvme_tcp 00:24:15.168 rmmod nvme_fabrics 00:24:15.168 rmmod nvme_keyring 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3353834 ']' 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3353834 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 3353834 ']' 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 3353834 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3353834 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3353834' 00:24:15.168 killing process with pid 3353834 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@971 -- # kill 3353834 00:24:15.168 11:06:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 3353834 00:24:17.078 11:06:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:17.078 11:06:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:17.078 11:06:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:17.078 11:06:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:17.078 11:06:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:17.078 11:06:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:17.078 11:06:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:17.078 11:06:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:17.078 11:06:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:17.078 11:06:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.078 11:06:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.078 11:06:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.623 11:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:19.623 00:24:19.623 real 0m24.125s 00:24:19.623 user 0m58.367s 00:24:19.623 sys 0m8.272s 00:24:19.623 11:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:19.623 11:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:19.623 ************************************ 00:24:19.623 END TEST nvmf_perf 00:24:19.623 ************************************ 00:24:19.623 11:06:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:19.623 11:06:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:19.623 11:06:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:19.623 11:06:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.623 ************************************ 00:24:19.623 START TEST nvmf_fio_host 00:24:19.623 ************************************ 00:24:19.623 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:19.623 * Looking for test storage... 00:24:19.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:19.624 11:06:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:19.624 11:06:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:19.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.624 --rc genhtml_branch_coverage=1 00:24:19.624 --rc genhtml_function_coverage=1 00:24:19.624 --rc genhtml_legend=1 00:24:19.624 --rc geninfo_all_blocks=1 00:24:19.624 --rc geninfo_unexecuted_blocks=1 00:24:19.624 00:24:19.624 ' 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:19.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.624 --rc genhtml_branch_coverage=1 00:24:19.624 --rc genhtml_function_coverage=1 00:24:19.624 --rc genhtml_legend=1 00:24:19.624 --rc geninfo_all_blocks=1 00:24:19.624 --rc geninfo_unexecuted_blocks=1 00:24:19.624 00:24:19.624 ' 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:19.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.624 --rc genhtml_branch_coverage=1 00:24:19.624 --rc genhtml_function_coverage=1 00:24:19.624 --rc genhtml_legend=1 00:24:19.624 --rc geninfo_all_blocks=1 00:24:19.624 --rc geninfo_unexecuted_blocks=1 00:24:19.624 00:24:19.624 ' 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:19.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.624 --rc genhtml_branch_coverage=1 00:24:19.624 --rc genhtml_function_coverage=1 00:24:19.624 --rc genhtml_legend=1 00:24:19.624 --rc geninfo_all_blocks=1 00:24:19.624 --rc geninfo_unexecuted_blocks=1 00:24:19.624 00:24:19.624 ' 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.624 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:19.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:19.625 11:06:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:19.625 11:06:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.0 (0x8086 - 0x159b)' 00:24:26.220 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:26.220 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:26.220 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.220 11:06:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:26.221 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:26.221 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.221 11:06:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:26.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:24:26.221 00:24:26.221 --- 10.0.0.2 ping statistics --- 00:24:26.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.221 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:26.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:24:26.221 00:24:26.221 --- 10.0.0.1 ping statistics --- 00:24:26.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.221 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:26.221 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:26.483 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:26.483 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:26.483 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:26.483 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.483 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3361196 00:24:26.483 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:26.483 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:26.483 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3361196 00:24:26.483 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 3361196 ']' 00:24:26.483 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.483 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:26.483 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.483 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:26.483 11:06:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.483 [2024-11-06 11:06:17.754908] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:24:26.483 [2024-11-06 11:06:17.754977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.483 [2024-11-06 11:06:17.838046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:26.483 [2024-11-06 11:06:17.879830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.483 [2024-11-06 11:06:17.879869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:26.483 [2024-11-06 11:06:17.879877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.483 [2024-11-06 11:06:17.879884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.483 [2024-11-06 11:06:17.879889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.483 [2024-11-06 11:06:17.881744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.483 [2024-11-06 11:06:17.881897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.483 [2024-11-06 11:06:17.882143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:26.483 [2024-11-06 11:06:17.882144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.426 11:06:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:27.427 11:06:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:24:27.427 11:06:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:27.427 [2024-11-06 11:06:18.713662] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.427 11:06:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:27.427 11:06:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:27.427 11:06:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.427 11:06:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:27.688 Malloc1 00:24:27.688 11:06:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.949 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:27.949 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.211 [2024-11-06 11:06:19.505518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.211 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:28.471 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:28.471 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:28.471 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:28.471 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:28.471 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:28.471 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:28.472 11:06:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:28.472 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:24:28.472 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:28.472 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:28.472 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:28.472 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:24:28.472 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:28.472 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:28.472 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:28.472 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:28.472 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:28.472 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:28.472 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:28.472 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:28.472 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:28.472 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:28.472 11:06:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:28.732 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:28.732 fio-3.35 00:24:28.732 Starting 1 thread 00:24:31.278 00:24:31.278 test: (groupid=0, jobs=1): err= 0: pid=3361991: Wed Nov 6 11:06:22 2024 00:24:31.278 read: IOPS=11.9k, BW=46.6MiB/s (48.9MB/s)(93.4MiB/2005msec) 00:24:31.278 slat (usec): min=2, max=224, avg= 2.14, stdev= 2.04 00:24:31.278 clat (usec): min=2953, max=9017, avg=5903.70, stdev=1162.80 00:24:31.278 lat (usec): min=2981, max=9020, avg=5905.84, stdev=1162.79 00:24:31.278 clat percentiles (usec): 00:24:31.278 | 1.00th=[ 4424], 5.00th=[ 4621], 10.00th=[ 4817], 20.00th=[ 4948], 00:24:31.278 | 30.00th=[ 5080], 40.00th=[ 5211], 50.00th=[ 5342], 60.00th=[ 5538], 00:24:31.278 | 70.00th=[ 6849], 80.00th=[ 7308], 90.00th=[ 7701], 95.00th=[ 7963], 00:24:31.278 | 99.00th=[ 8356], 99.50th=[ 8455], 99.90th=[ 8717], 99.95th=[ 8848], 00:24:31.278 | 99.99th=[ 8848] 00:24:31.278 bw ( KiB/s): min=36752, max=55784, per=99.97%, avg=47700.00, stdev=9456.71, samples=4 00:24:31.278 iops : min= 9188, max=13946, avg=11925.00, stdev=2364.18, samples=4 00:24:31.278 write: IOPS=11.9k, BW=46.4MiB/s (48.6MB/s)(93.0MiB/2005msec); 0 zone resets 00:24:31.278 slat (usec): min=2, max=215, avg= 2.22, stdev= 1.53 00:24:31.278 clat (usec): min=2306, max=7960, avg=4767.55, stdev=939.82 00:24:31.278 lat (usec): min=2320, max=7962, avg=4769.77, stdev=939.84 00:24:31.278 clat percentiles (usec): 00:24:31.278 | 1.00th=[ 3523], 5.00th=[ 3752], 10.00th=[ 3851], 20.00th=[ 3982], 00:24:31.278 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4490], 00:24:31.278 | 70.00th=[ 
5538], 80.00th=[ 5866], 90.00th=[ 6194], 95.00th=[ 6390], 00:24:31.278 | 99.00th=[ 6718], 99.50th=[ 6849], 99.90th=[ 7177], 99.95th=[ 7308], 00:24:31.278 | 99.99th=[ 7898] 00:24:31.278 bw ( KiB/s): min=37640, max=55432, per=100.00%, avg=47500.00, stdev=9058.75, samples=4 00:24:31.278 iops : min= 9410, max=13858, avg=11875.00, stdev=2264.69, samples=4 00:24:31.278 lat (msec) : 4=10.32%, 10=89.68% 00:24:31.278 cpu : usr=73.15%, sys=25.50%, ctx=44, majf=0, minf=17 00:24:31.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:31.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:31.278 issued rwts: total=23917,23810,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:31.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:31.278 00:24:31.278 Run status group 0 (all jobs): 00:24:31.278 READ: bw=46.6MiB/s (48.9MB/s), 46.6MiB/s-46.6MiB/s (48.9MB/s-48.9MB/s), io=93.4MiB (98.0MB), run=2005-2005msec 00:24:31.278 WRITE: bw=46.4MiB/s (48.6MB/s), 46.4MiB/s-46.4MiB/s (48.6MB/s-48.6MB/s), io=93.0MiB (97.5MB), run=2005-2005msec 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:31.278 
11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:31.278 11:06:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:31.538 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:31.538 fio-3.35 00:24:31.538 Starting 1 thread 00:24:34.083 00:24:34.083 test: (groupid=0, jobs=1): err= 0: pid=3362623: Wed Nov 6 11:06:25 2024 00:24:34.083 read: IOPS=9264, BW=145MiB/s (152MB/s)(290MiB/2005msec) 00:24:34.083 slat (usec): min=3, max=110, avg= 3.62, stdev= 1.57 00:24:34.083 clat (usec): min=1199, max=15498, avg=8328.90, stdev=1909.05 00:24:34.083 lat (usec): min=1202, max=15502, avg=8332.51, stdev=1909.17 00:24:34.083 clat percentiles (usec): 00:24:34.083 | 1.00th=[ 4293], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6521], 00:24:34.083 | 30.00th=[ 7242], 40.00th=[ 7767], 50.00th=[ 8291], 60.00th=[ 8848], 00:24:34.083 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[10683], 95.00th=[11207], 00:24:34.083 | 99.00th=[12649], 99.50th=[13173], 99.90th=[14615], 99.95th=[15008], 00:24:34.083 | 99.99th=[15401] 00:24:34.083 bw ( KiB/s): min=66752, max=85536, per=49.15%, avg=72848.00, stdev=8591.13, samples=4 00:24:34.083 iops : min= 4172, max= 5346, avg=4553.00, stdev=536.95, samples=4 00:24:34.083 write: IOPS=5461, BW=85.3MiB/s (89.5MB/s)(149MiB/1751msec); 0 zone resets 00:24:34.083 slat (usec): min=39, max=360, avg=40.97, stdev= 7.08 00:24:34.083 clat (usec): min=2796, max=17082, avg=9595.74, stdev=1546.07 00:24:34.083 lat (usec): min=2836, max=17122, avg=9636.71, stdev=1547.30 00:24:34.083 clat percentiles (usec): 00:24:34.083 | 1.00th=[ 6456], 5.00th=[ 7439], 10.00th=[ 7832], 20.00th=[ 8455], 00:24:34.083 
| 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:24:34.083 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11600], 95.00th=[12518], 00:24:34.083 | 99.00th=[14091], 99.50th=[14353], 99.90th=[15270], 99.95th=[15533], 00:24:34.083 | 99.99th=[17171] 00:24:34.083 bw ( KiB/s): min=69376, max=89088, per=86.99%, avg=76016.00, stdev=8954.87, samples=4 00:24:34.083 iops : min= 4336, max= 5568, avg=4751.00, stdev=559.68, samples=4 00:24:34.083 lat (msec) : 2=0.01%, 4=0.44%, 10=73.62%, 20=25.93% 00:24:34.083 cpu : usr=83.63%, sys=14.62%, ctx=17, majf=0, minf=31 00:24:34.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:34.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:34.083 issued rwts: total=18575,9563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:34.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:34.083 00:24:34.083 Run status group 0 (all jobs): 00:24:34.083 READ: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=290MiB (304MB), run=2005-2005msec 00:24:34.083 WRITE: bw=85.3MiB/s (89.5MB/s), 85.3MiB/s-85.3MiB/s (89.5MB/s-89.5MB/s), io=149MiB (157MB), run=1751-1751msec 00:24:34.083 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:34.083 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:34.083 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:34.083 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:34.083 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:34.083 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:34.083 11:06:25 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:34.083 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:34.083 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:34.083 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:34.083 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:34.083 rmmod nvme_tcp 00:24:34.083 rmmod nvme_fabrics 00:24:34.083 rmmod nvme_keyring 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3361196 ']' 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3361196 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 3361196 ']' 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 3361196 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3361196 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3361196' 00:24:34.344 killing 
process with pid 3361196 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 3361196 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 3361196 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.344 11:06:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.890 11:06:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:36.890 00:24:36.890 real 0m17.167s 00:24:36.890 user 1m9.935s 00:24:36.890 sys 0m7.356s 00:24:36.890 11:06:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:36.890 11:06:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.890 ************************************ 00:24:36.890 END TEST 
nvmf_fio_host 00:24:36.890 ************************************ 00:24:36.890 11:06:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:36.890 11:06:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:36.890 11:06:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:36.890 11:06:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.890 ************************************ 00:24:36.890 START TEST nvmf_failover 00:24:36.890 ************************************ 00:24:36.890 11:06:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:36.890 * Looking for test storage... 00:24:36.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:36.890 11:06:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:36.890 11:06:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:24:36.890 11:06:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@337 -- # IFS=.-: 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:36.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.890 --rc genhtml_branch_coverage=1 00:24:36.890 --rc genhtml_function_coverage=1 00:24:36.890 --rc genhtml_legend=1 00:24:36.890 --rc geninfo_all_blocks=1 00:24:36.890 --rc geninfo_unexecuted_blocks=1 00:24:36.890 00:24:36.890 ' 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:36.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.890 --rc genhtml_branch_coverage=1 00:24:36.890 --rc genhtml_function_coverage=1 00:24:36.890 --rc genhtml_legend=1 00:24:36.890 --rc geninfo_all_blocks=1 00:24:36.890 --rc geninfo_unexecuted_blocks=1 00:24:36.890 00:24:36.890 ' 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:36.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.890 --rc genhtml_branch_coverage=1 00:24:36.890 --rc genhtml_function_coverage=1 00:24:36.890 --rc genhtml_legend=1 00:24:36.890 --rc geninfo_all_blocks=1 00:24:36.890 --rc geninfo_unexecuted_blocks=1 00:24:36.890 00:24:36.890 ' 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:36.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.890 --rc genhtml_branch_coverage=1 00:24:36.890 --rc genhtml_function_coverage=1 00:24:36.890 --rc genhtml_legend=1 00:24:36.890 --rc geninfo_all_blocks=1 
00:24:36.890 --rc geninfo_unexecuted_blocks=1 00:24:36.890 00:24:36.890 ' 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.890 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:36.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:36.891 11:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:45.032 11:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.032 11:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:45.032 11:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:45.032 11:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:24:45.032 11:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:45.032 11:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:45.032 11:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:45.032 11:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.032 11:06:35 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:45.032 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:45.032 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:45.032 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:45.032 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:45.032 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:45.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:24:45.033 00:24:45.033 --- 10.0.0.2 ping statistics --- 00:24:45.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.033 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:45.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:24:45.033 00:24:45.033 --- 10.0.0.1 ping statistics --- 00:24:45.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.033 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3367196 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3367196 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3367196 ']' 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:45.033 11:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:45.033 [2024-11-06 11:06:35.426421] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:24:45.033 [2024-11-06 11:06:35.426491] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.033 [2024-11-06 11:06:35.525730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:45.033 [2024-11-06 11:06:35.577929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.033 [2024-11-06 11:06:35.577986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.033 [2024-11-06 11:06:35.577995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.033 [2024-11-06 11:06:35.578002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:45.033 [2024-11-06 11:06:35.578008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.033 [2024-11-06 11:06:35.579808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.033 [2024-11-06 11:06:35.580009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.033 [2024-11-06 11:06:35.580010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:45.033 11:06:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:45.033 11:06:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:24:45.033 11:06:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:45.033 11:06:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:45.033 11:06:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:45.033 11:06:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.033 11:06:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:45.033 [2024-11-06 11:06:36.435434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.293 11:06:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:45.293 Malloc0 00:24:45.293 11:06:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:45.554 11:06:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:45.815 11:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:45.815 [2024-11-06 11:06:37.180205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.815 11:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:46.075 [2024-11-06 11:06:37.364697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:46.075 11:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:46.336 [2024-11-06 11:06:37.549253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:46.336 11:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3367780 00:24:46.336 11:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:46.336 11:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:46.336 11:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3367780 /var/tmp/bdevperf.sock 00:24:46.336 11:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 
-- # '[' -z 3367780 ']' 00:24:46.336 11:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:46.336 11:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:46.336 11:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:46.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:46.336 11:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:46.336 11:06:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.277 11:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:47.277 11:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:24:47.277 11:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:47.277 NVMe0n1 00:24:47.538 11:06:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:47.798 00:24:47.798 11:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3368014 00:24:47.798 11:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:47.798 11:06:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:24:48.737 11:06:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:48.999 [2024-11-06 11:06:40.241378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22664e0 is same with the state(6) to be set
00:24:49.000 11:06:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:52.298 11:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:52.298
00:24:52.298 11:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:52.559 11:06:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:55.861 11:06:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:55.861 [2024-11-06 11:06:47.053377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:55.861 11:06:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:24:56.802 11:06:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:57.063 [2024-11-06 11:06:48.248666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249173] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 [2024-11-06 11:06:48.249226] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c4e0 is same with the state(6) to be set 00:24:57.064 11:06:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3368014 00:25:03.661 { 00:25:03.661 "results": [ 00:25:03.661 { 00:25:03.661 "job": "NVMe0n1", 00:25:03.661 "core_mask": "0x1", 00:25:03.661 "workload": "verify", 00:25:03.661 "status": "finished", 00:25:03.661 "verify_range": { 00:25:03.661 "start": 0, 00:25:03.661 "length": 16384 00:25:03.661 }, 00:25:03.661 "queue_depth": 128, 00:25:03.661 "io_size": 4096, 00:25:03.661 "runtime": 15.006641, 00:25:03.661 "iops": 11117.144736120496, 00:25:03.661 "mibps": 43.426346625470686, 00:25:03.661 "io_failed": 12093, 00:25:03.661 "io_timeout": 0, 00:25:03.661 "avg_latency_us": 10708.30020880374, 00:25:03.661 "min_latency_us": 781.6533333333333, 00:25:03.661 "max_latency_us": 21845.333333333332 00:25:03.661 } 00:25:03.661 ], 00:25:03.661 "core_count": 1 00:25:03.661 } 00:25:03.661 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3367780 00:25:03.661 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3367780 ']' 00:25:03.661 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3367780 00:25:03.661 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:25:03.661 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:03.661 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3367780 00:25:03.661 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:03.661 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:03.661 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
3367780' 00:25:03.661 killing process with pid 3367780 00:25:03.661 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3367780 00:25:03.661 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3367780 00:25:03.661 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:03.661 [2024-11-06 11:06:37.641118] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:25:03.661 [2024-11-06 11:06:37.641196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3367780 ] 00:25:03.661 [2024-11-06 11:06:37.713945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.661 [2024-11-06 11:06:37.749530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.661 Running I/O for 15 seconds... 
00:25:03.661 10961.00 IOPS, 42.82 MiB/s [2024-11-06T10:06:55.083Z] [2024-11-06 11:06:40.242534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.661 [2024-11-06 11:06:40.242570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.661 [2024-11-06 11:06:40.242587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.661 [2024-11-06 11:06:40.242596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.661 [2024-11-06 11:06:40.242606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:03.662 [2024-11-06 11:06:40.242664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:03.662 [2024-11-06 11:06:40.242957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.242985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.242992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.243009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.243026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.243042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.243059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.243075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.243092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.243108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.243125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.243142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.243158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.243175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.243193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.243209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.243226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 
[2024-11-06 11:06:40.243242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.243258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.662 [2024-11-06 11:06:40.243275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.662 [2024-11-06 11:06:40.243284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.663 [2024-11-06 11:06:40.243291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.663 [2024-11-06 11:06:40.243300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.663 [2024-11-06 11:06:40.243308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.663 [2024-11-06 11:06:40.243317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.663 [2024-11-06 11:06:40.243324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.663 [2024-11-06 11:06:40.243334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.663 [2024-11-06 11:06:40.243341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.663 [2024-11-06 11:06:40.243350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.663 [2024-11-06 11:06:40.243357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.663 [2024-11-06 11:06:40.243366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.663 [2024-11-06 11:06:40.243374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.663 [2024-11-06 11:06:40.243383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.663 [2024-11-06 11:06:40.243395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.663 [2024-11-06 11:06:40.243404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.663 [2024-11-06 11:06:40.243411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.663 [2024-11-06 11:06:40.243421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.663 [2024-11-06 11:06:40.243428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.663 [2024-11-06 11:06:40.243437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.663 [2024-11-06 11:06:40.243445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeat: READ sqid:1 lba:94688 through lba:94832 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), then WRITE sqid:1 lba:94840 through lba:95280 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:25:03.665 [2024-11-06 11:06:40.244714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:03.665 [2024-11-06 11:06:40.244722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:03.665 [2024-11-06 11:06:40.244728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95288 len:8 PRP1 0x0 PRP2 0x0
00:25:03.665 [2024-11-06 11:06:40.244738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:03.665 [2024-11-06 11:06:40.244789] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... four ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0 through cid:3, cdw10:00000000 cdw11:00000000) each completed ABORTED - SQ DELETION (00/08) qid:0 ...]
00:25:03.665 [2024-11-06 11:06:40.244873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:03.665 [2024-11-06 11:06:40.248433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:03.665 [2024-11-06 11:06:40.248459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228bd70 (9): Bad file descriptor
00:25:03.665 [2024-11-06 11:06:40.359340] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:25:03.665 10655.00 IOPS, 41.62 MiB/s [2024-11-06T10:06:55.087Z] 10881.67 IOPS, 42.51 MiB/s [2024-11-06T10:06:55.087Z] 10945.25 IOPS, 42.75 MiB/s [2024-11-06T10:06:55.087Z]
00:25:03.665 [2024-11-06 11:06:43.869996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:03.665 [2024-11-06 11:06:43.870045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command/completion pairs repeat for sqid:1 lba:55176 through lba:55424 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:25:03.666 [2024-11-06 11:06:43.870612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-11-06 11:06:43.870620] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 
nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 
[2024-11-06 11:06:43.870816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.870990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.870999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.871006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.871015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.666 [2024-11-06 11:06:43.871022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.666 [2024-11-06 11:06:43.871031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.667 [2024-11-06 11:06:43.871039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.667 [2024-11-06 11:06:43.871055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.667 [2024-11-06 11:06:43.871071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.667 [2024-11-06 11:06:43.871088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 
11:06:43.871097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.667 [2024-11-06 11:06:43.871104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.667 [2024-11-06 11:06:43.871120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55680 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55688 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 
11:06:43.871208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55696 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55704 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55712 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55720 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55728 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55736 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55744 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871390] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55752 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55760 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55768 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55776 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 
[2024-11-06 11:06:43.871481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55784 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55792 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55800 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55056 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55064 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55072 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871662] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55080 len:8 PRP1 0x0 PRP2 0x0 00:25:03.667 [2024-11-06 11:06:43.871669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.667 [2024-11-06 11:06:43.871676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.667 [2024-11-06 11:06:43.871682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.667 [2024-11-06 11:06:43.871688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55088 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.871695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.871702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.871708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.871714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55096 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.871721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.871728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.871734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.871740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55104 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.871750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 
[2024-11-06 11:06:43.871758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.871763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.871770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55808 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.871777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.871786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.871792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.871798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55816 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.871806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.871813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.871819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.871825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55824 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.871832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.871840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.871845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:25:03.668 [2024-11-06 11:06:43.871851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55832 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.871858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.871866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.871871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.871877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55840 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.871885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.871892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.871898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.871904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55848 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.871911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.871918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.871923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.871929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55856 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.871936] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.871944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.871949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.871955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55864 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.871962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.871970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.871976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.871982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55872 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.871990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.871997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.872003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.872009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55880 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.872017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.872024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.872030] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.872036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55888 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.872043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.872051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.872056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.872062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55896 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.872069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.872077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.872082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.872088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55904 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.872095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.872103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.872108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.872115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55912 len:8 PRP1 0x0 PRP2 0x0 
00:25:03.668 [2024-11-06 11:06:43.872122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.872129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.872135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.872141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55920 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.872148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.872155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.872161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.872167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55928 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.872174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.872184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.872190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.872199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55936 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.872206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.872214] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.872220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.872226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55944 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.872233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.872241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.872246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.872252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55952 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.872259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.872267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.872273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.872279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55960 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.872287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.872294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.668 [2024-11-06 11:06:43.872300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.668 [2024-11-06 11:06:43.872306] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55968 len:8 PRP1 0x0 PRP2 0x0 00:25:03.668 [2024-11-06 11:06:43.872313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.668 [2024-11-06 11:06:43.872321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.872327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.872333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55976 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.872340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.872348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55984 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55992 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56000 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56008 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56016 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883508] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56024 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56032 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56040 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56048 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 
[2024-11-06 11:06:43.883602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56056 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56064 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55112 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55120 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55128 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55136 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883792] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55144 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55152 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55160 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.669 [2024-11-06 11:06:43.883871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.669 [2024-11-06 11:06:43.883877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55168 len:8 PRP1 0x0 PRP2 0x0 00:25:03.669 [2024-11-06 11:06:43.883885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 
[2024-11-06 11:06:43.883926] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:03.669 [2024-11-06 11:06:43.883955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.669 [2024-11-06 11:06:43.883964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.669 [2024-11-06 11:06:43.883981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.883989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.669 [2024-11-06 11:06:43.883996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.884005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.669 [2024-11-06 11:06:43.884012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.669 [2024-11-06 11:06:43.884020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:25:03.669 [2024-11-06 11:06:43.884061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228bd70 (9): Bad file descriptor 00:25:03.669 [2024-11-06 11:06:43.887548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:03.669 [2024-11-06 11:06:44.048679] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:25:03.669 10639.00 IOPS, 41.56 MiB/s [2024-11-06T10:06:55.092Z] 10781.50 IOPS, 42.12 MiB/s [2024-11-06T10:06:55.092Z] 10917.71 IOPS, 42.65 MiB/s [2024-11-06T10:06:55.092Z] 10991.12 IOPS, 42.93 MiB/s [2024-11-06T10:06:55.092Z] 11059.00 IOPS, 43.20 MiB/s [2024-11-06T10:06:55.092Z] [2024-11-06 11:06:48.248743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.670 [2024-11-06 11:06:48.248786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.248798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.670 [2024-11-06 11:06:48.248807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.248816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.670 [2024-11-06 11:06:48.248825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.248833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.670 [2024-11-06 
11:06:48.248847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.248855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228bd70 is same with the state(6) to be set 00:25:03.670 [2024-11-06 11:06:48.252350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252447] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100664 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252647] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252738] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.670 [2024-11-06 11:06:48.252828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100800 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:03.670 [2024-11-06 11:06:48.252845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.670 [2024-11-06 11:06:48.252861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.670 [2024-11-06 11:06:48.252871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.252878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.252887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.252894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.252904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.252911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.252920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.252928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.252937] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.252944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.252954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.252961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.252970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.252977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.252987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.252994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:03.671 [2024-11-06 11:06:48.253410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.671 [2024-11-06 11:06:48.253530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.671 [2024-11-06 11:06:48.253540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.672 [2024-11-06 11:06:48.253547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.672 [2024-11-06 11:06:48.253564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.672 [2024-11-06 11:06:48.253580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.672 [2024-11-06 11:06:48.253597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.672 [2024-11-06 11:06:48.253613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.253641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101176 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.253650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.253667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.253673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101184 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.253680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.253693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.253699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101192 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.253706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.253720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.253726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101200 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.253732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.253750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.253757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101208 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.253764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.253777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.253783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101216 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.253790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.253804] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.253809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101224 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.253816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.253830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.253836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101232 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.253843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.253858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.253864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101240 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.253871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.253884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.253890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101248 len:8 PRP1 0x0 PRP2 0x0 
00:25:03.672 [2024-11-06 11:06:48.253897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.253911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.253917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101256 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.253924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.253937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.253944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101264 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.253951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.253964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.253970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101272 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.253977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.253985] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.253990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.253996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101280 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.254003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.254011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.254016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.254024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101288 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.254032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.254040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.254046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.254052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101296 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.254059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.254068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.254073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.254079] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101304 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.254087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.254094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.254100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.254106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101312 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.254113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.254121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.254126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.254133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101320 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.254141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.254148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.254154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.254160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101328 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.254167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.254175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.672 [2024-11-06 11:06:48.254180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.672 [2024-11-06 11:06:48.254187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101336 len:8 PRP1 0x0 PRP2 0x0 00:25:03.672 [2024-11-06 11:06:48.254194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.672 [2024-11-06 11:06:48.254201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.673 [2024-11-06 11:06:48.254207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.673 [2024-11-06 11:06:48.254213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101344 len:8 PRP1 0x0 PRP2 0x0 00:25:03.673 [2024-11-06 11:06:48.254220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.673 [2024-11-06 11:06:48.254228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.673 [2024-11-06 11:06:48.254233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.673 [2024-11-06 11:06:48.254240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101352 len:8 PRP1 0x0 PRP2 0x0 00:25:03.673 [2024-11-06 11:06:48.254247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.673 [2024-11-06 11:06:48.254255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.673 [2024-11-06 11:06:48.254260] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.673 [2024-11-06 11:06:48.254266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101360 len:8 PRP1 0x0 PRP2 0x0 00:25:03.673 [2024-11-06 11:06:48.254275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.673 [2024-11-06 11:06:48.254282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.673 [2024-11-06 11:06:48.254288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.673 [2024-11-06 11:06:48.254294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101368 len:8 PRP1 0x0 PRP2 0x0 00:25:03.673 [2024-11-06 11:06:48.254302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.673 [2024-11-06 11:06:48.254309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.673 [2024-11-06 11:06:48.254315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.673 [2024-11-06 11:06:48.254321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101376 len:8 PRP1 0x0 PRP2 0x0 00:25:03.673 [2024-11-06 11:06:48.254328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.673 [2024-11-06 11:06:48.254336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.673 [2024-11-06 11:06:48.254341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.673 [2024-11-06 11:06:48.254347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101384 len:8 PRP1 0x0 PRP2 0x0 
00:25:03.673 [2024-11-06 11:06:48.254354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.673 [2024-11-06 11:06:48.254362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.673 [2024-11-06 11:06:48.254368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.673 [2024-11-06 11:06:48.254375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101392 len:8 PRP1 0x0 PRP2 0x0 00:25:03.673 [2024-11-06 11:06:48.254382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... the same abort/manual-complete/ABORTED - SQ DELETION triplet repeats for each remaining queued WRITE (sqid:1 cid:0 nsid:1 len:8), lba 101400 through 101592 in steps of 8 ...] 00:25:03.674 [2024-11-06 11:06:48.266585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.674 [2024-11-06 11:06:48.266590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.674 [2024-11-06 11:06:48.266596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101600 len:8 PRP1 0x0 PRP2 0x0 00:25:03.674 [2024-11-06 11:06:48.266603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.674 [2024-11-06 11:06:48.266645] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:03.674 [2024-11-06 11:06:48.266656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:03.674 [2024-11-06 11:06:48.266701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228bd70 (9): Bad file descriptor 00:25:03.674 [2024-11-06 11:06:48.270202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:03.674 [2024-11-06 11:06:48.301260] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:25:03.674 11044.80 IOPS, 43.14 MiB/s [2024-11-06T10:06:55.096Z] 11060.36 IOPS, 43.20 MiB/s [2024-11-06T10:06:55.096Z] 11073.92 IOPS, 43.26 MiB/s [2024-11-06T10:06:55.096Z] 11104.54 IOPS, 43.38 MiB/s [2024-11-06T10:06:55.096Z] 11118.64 IOPS, 43.43 MiB/s 00:25:03.674 Latency(us) 00:25:03.674 [2024-11-06T10:06:55.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.674 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:03.674 Verification LBA range: start 0x0 length 0x4000 00:25:03.674 NVMe0n1 : 15.01 11117.14 43.43 805.84 0.00 10708.30 781.65 21845.33 00:25:03.674 [2024-11-06T10:06:55.096Z] =================================================================================================================== 00:25:03.674 [2024-11-06T10:06:55.096Z] Total : 11117.14 43.43 805.84 0.00 10708.30 781.65 21845.33 00:25:03.674 Received shutdown signal, test time was about 15.000000 seconds 00:25:03.674 00:25:03.674 Latency(us) 00:25:03.674 [2024-11-06T10:06:55.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.674 [2024-11-06T10:06:55.096Z] 
=================================================================================================================== 00:25:03.674 [2024-11-06T10:06:55.096Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:03.674 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:03.674 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:03.674 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:03.674 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3370977 00:25:03.674 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3370977 /var/tmp/bdevperf.sock 00:25:03.674 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:03.674 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3370977 ']' 00:25:03.674 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:03.674 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:03.674 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:03.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:03.674 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:03.674 11:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:03.935 11:06:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:03.935 11:06:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:25:03.935 11:06:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:04.195 [2024-11-06 11:06:55.435107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:04.195 11:06:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:04.195 [2024-11-06 11:06:55.615523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:04.455 11:06:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:04.716 NVMe0n1 00:25:04.716 11:06:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:04.977 00:25:04.977 11:06:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:05.547 00:25:05.547 11:06:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:05.547 11:06:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:05.547 11:06:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:05.808 11:06:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:09.110 11:07:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:09.110 11:07:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:09.110 11:07:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3372224 00:25:09.110 11:07:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3372224 00:25:09.110 11:07:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:10.051 { 00:25:10.051 "results": [ 00:25:10.051 { 00:25:10.051 "job": "NVMe0n1", 00:25:10.051 "core_mask": "0x1", 00:25:10.051 "workload": "verify", 00:25:10.051 "status": "finished", 00:25:10.051 "verify_range": { 00:25:10.051 "start": 0, 00:25:10.051 "length": 16384 00:25:10.051 }, 00:25:10.051 "queue_depth": 128, 00:25:10.051 "io_size": 4096, 00:25:10.051 "runtime": 1.010094, 00:25:10.051 "iops": 11542.490104881328, 00:25:10.051 "mibps": 45.08785197219269, 00:25:10.051 "io_failed": 0, 00:25:10.051 "io_timeout": 0, 00:25:10.051 "avg_latency_us": 
11033.648745175402, 00:25:10.051 "min_latency_us": 2443.9466666666667, 00:25:10.051 "max_latency_us": 12670.293333333333 00:25:10.051 } 00:25:10.051 ], 00:25:10.051 "core_count": 1 00:25:10.051 } 00:25:10.051 11:07:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:10.051 [2024-11-06 11:06:54.488222] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:25:10.051 [2024-11-06 11:06:54.488282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370977 ] 00:25:10.051 [2024-11-06 11:06:54.558649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.051 [2024-11-06 11:06:54.594079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.051 [2024-11-06 11:06:57.037434] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:10.051 [2024-11-06 11:06:57.037479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.051 [2024-11-06 11:06:57.037490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.051 [2024-11-06 11:06:57.037500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.051 [2024-11-06 11:06:57.037507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.051 [2024-11-06 11:06:57.037516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.051 [2024-11-06 11:06:57.037523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.051 [2024-11-06 11:06:57.037531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.051 [2024-11-06 11:06:57.037538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.051 [2024-11-06 11:06:57.037545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:10.051 [2024-11-06 11:06:57.037571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:10.051 [2024-11-06 11:06:57.037586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef8d70 (9): Bad file descriptor 00:25:10.051 [2024-11-06 11:06:57.046971] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:10.051 Running I/O for 1 seconds... 
00:25:10.051 11523.00 IOPS, 45.01 MiB/s 00:25:10.051 Latency(us) 00:25:10.051 [2024-11-06T10:07:01.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.051 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:10.051 Verification LBA range: start 0x0 length 0x4000 00:25:10.051 NVMe0n1 : 1.01 11542.49 45.09 0.00 0.00 11033.65 2443.95 12670.29 00:25:10.051 [2024-11-06T10:07:01.473Z] =================================================================================================================== 00:25:10.051 [2024-11-06T10:07:01.473Z] Total : 11542.49 45.09 0.00 0.00 11033.65 2443.95 12670.29 00:25:10.051 11:07:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:10.051 11:07:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:10.312 11:07:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:10.572 11:07:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:10.572 11:07:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:10.572 11:07:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:10.832 11:07:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:14.129 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:14.129 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:14.129 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3370977 00:25:14.129 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3370977 ']' 00:25:14.129 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3370977 00:25:14.129 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:25:14.129 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:14.129 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3370977 00:25:14.129 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:14.129 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:14.129 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3370977' 00:25:14.129 killing process with pid 3370977 00:25:14.129 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3370977 00:25:14.129 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3370977 00:25:14.129 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:14.129 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:14.390 rmmod nvme_tcp 00:25:14.390 rmmod nvme_fabrics 00:25:14.390 rmmod nvme_keyring 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3367196 ']' 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3367196 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3367196 ']' 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3367196 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3367196 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3367196' 00:25:14.390 killing process with pid 3367196 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3367196 00:25:14.390 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3367196 00:25:14.658 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:14.658 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:14.658 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:14.658 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:14.658 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:14.658 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:14.658 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:14.658 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:14.658 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:14.658 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.658 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.658 11:07:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.680 11:07:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:16.680 00:25:16.680 real 0m40.104s 00:25:16.680 user 2m4.258s 00:25:16.680 sys 
0m8.316s 00:25:16.680 11:07:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:16.680 11:07:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:16.680 ************************************ 00:25:16.680 END TEST nvmf_failover 00:25:16.680 ************************************ 00:25:16.680 11:07:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:16.680 11:07:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:16.680 11:07:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:16.680 11:07:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.680 ************************************ 00:25:16.680 START TEST nvmf_host_discovery 00:25:16.680 ************************************ 00:25:16.680 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:16.943 * Looking for test storage... 
00:25:16.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:16.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.943 --rc genhtml_branch_coverage=1 00:25:16.943 --rc genhtml_function_coverage=1 00:25:16.943 --rc 
genhtml_legend=1 00:25:16.943 --rc geninfo_all_blocks=1 00:25:16.943 --rc geninfo_unexecuted_blocks=1 00:25:16.943 00:25:16.943 ' 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:16.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.943 --rc genhtml_branch_coverage=1 00:25:16.943 --rc genhtml_function_coverage=1 00:25:16.943 --rc genhtml_legend=1 00:25:16.943 --rc geninfo_all_blocks=1 00:25:16.943 --rc geninfo_unexecuted_blocks=1 00:25:16.943 00:25:16.943 ' 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:16.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.943 --rc genhtml_branch_coverage=1 00:25:16.943 --rc genhtml_function_coverage=1 00:25:16.943 --rc genhtml_legend=1 00:25:16.943 --rc geninfo_all_blocks=1 00:25:16.943 --rc geninfo_unexecuted_blocks=1 00:25:16.943 00:25:16.943 ' 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:16.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.943 --rc genhtml_branch_coverage=1 00:25:16.943 --rc genhtml_function_coverage=1 00:25:16.943 --rc genhtml_legend=1 00:25:16.943 --rc geninfo_all_blocks=1 00:25:16.943 --rc geninfo_unexecuted_blocks=1 00:25:16.943 00:25:16.943 ' 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.943 11:07:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.943 11:07:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.943 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.944 11:07:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:16.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:16.944 11:07:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.092 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:25.092 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:25.092 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:25.092 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:25.092 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:25.092 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:25.092 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:25.092 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:25.092 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:25.092 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:25.092 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:25.092 
11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:25.092 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:25.092 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:25.092 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:25.092 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:25.092 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:25.092 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:25.093 11:07:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:25.093 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:25.093 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:25.093 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:25.093 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:25.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:25.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:25:25.093 00:25:25.093 --- 10.0.0.2 ping statistics --- 00:25:25.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.093 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:25.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:25.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:25:25.093 00:25:25.093 --- 10.0.0.1 ping statistics --- 00:25:25.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.093 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:25.093 
11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3377389 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3377389 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 3377389 ']' 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:25.093 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:25.094 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.094 11:07:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:25.094 [2024-11-06 11:07:15.653045] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:25:25.094 [2024-11-06 11:07:15.653117] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.094 [2024-11-06 11:07:15.751331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.094 [2024-11-06 11:07:15.790685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:25.094 [2024-11-06 11:07:15.790729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:25.094 [2024-11-06 11:07:15.790743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:25.094 [2024-11-06 11:07:15.790757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:25.094 [2024-11-06 11:07:15.790763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:25.094 [2024-11-06 11:07:15.791448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.094 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:25.094 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:25:25.094 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:25.094 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:25.094 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.094 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.094 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:25.094 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.094 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.094 [2024-11-06 11:07:16.508580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.355 [2024-11-06 11:07:16.520898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:25.355 11:07:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.355 null0 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.355 null1 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3377597 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3377597 /tmp/host.sock 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@833 -- # '[' -z 3377597 ']' 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:25.355 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:25.355 11:07:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.355 [2024-11-06 11:07:16.618054] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:25:25.355 [2024-11-06 11:07:16.618119] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377597 ] 00:25:25.355 [2024-11-06 11:07:16.697263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.355 [2024-11-06 11:07:16.739160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:26.300 
11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:26.300 11:07:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:26.300 
11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:26.300 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.561 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:26.561 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:25:26.561 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.562 [2024-11-06 11:07:17.735917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:25:26.562 11:07:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:25:27.133 [2024-11-06 11:07:18.477731] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:27.133 [2024-11-06 11:07:18.477754] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:27.133 [2024-11-06 11:07:18.477768] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:27.394 [2024-11-06 11:07:18.568044] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:27.394 [2024-11-06 11:07:18.667913] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:27.394 [2024-11-06 11:07:18.668886] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x811780:1 started. 00:25:27.394 [2024-11-06 11:07:18.670505] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:27.394 [2024-11-06 11:07:18.670524] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:27.394 [2024-11-06 11:07:18.676196] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x811780 was disconnected and freed. delete nvme_qpair. 00:25:27.656 11:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:27.656 11:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:27.656 11:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:27.656 11:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:27.656 11:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:27.656 11:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.656 11:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:27.656 11:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.656 11:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:27.656 11:07:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:27.656 11:07:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.656 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:27.917 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:27.918 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:27.918 
11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:27.918 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.918 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:27.918 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.918 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:27.918 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.918 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:27.918 [2024-11-06 11:07:19.327858] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x811b20:1 started. 00:25:28.178 [2024-11-06 11:07:19.337977] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x811b20 was disconnected and freed. delete nvme_qpair. 
00:25:28.178 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.178 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:28.178 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:28.178 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:28.178 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:28.178 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:28.178 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:28.178 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:28.178 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.179 [2024-11-06 11:07:19.416626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:28.179 [2024-11-06 11:07:19.417454] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:28.179 [2024-11-06 11:07:19.417474] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:28.179 11:07:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:28.179 [2024-11-06 11:07:19.505184] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # local max=10 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.179 [2024-11-06 11:07:19.563950] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:28.179 [2024-11-06 11:07:19.563985] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:28.179 [2024-11-06 11:07:19.563995] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:28.179 [2024-11-06 11:07:19.564000] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:28.179 11:07:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count 
&& ((notification_count == expected_count))' 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.563 [2024-11-06 11:07:20.688266] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:29.563 [2024-11-06 11:07:20.688289] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:29.563 [2024-11-06 11:07:20.692428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.563 [2024-11-06 11:07:20.692448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.563 [2024-11-06 11:07:20.692459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.563 [2024-11-06 11:07:20.692467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.563 [2024-11-06 11:07:20.692475] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.563 [2024-11-06 11:07:20.692482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.563 [2024-11-06 11:07:20.692490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:29.563 [2024-11-06 11:07:20.692497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:29.563 [2024-11-06 11:07:20.692505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1e10 is same with the state(6) to be set 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:29.563 
11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:29.563 [2024-11-06 11:07:20.702442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e1e10 (9): Bad file descriptor 00:25:29.563 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.563 [2024-11-06 11:07:20.712478] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:29.563 [2024-11-06 11:07:20.712490] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:29.563 [2024-11-06 11:07:20.712496] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:29.563 [2024-11-06 11:07:20.712501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:29.563 [2024-11-06 11:07:20.712520] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:29.563 [2024-11-06 11:07:20.713062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.563 [2024-11-06 11:07:20.713100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e1e10 with addr=10.0.0.2, port=4420 00:25:29.563 [2024-11-06 11:07:20.713111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1e10 is same with the state(6) to be set 00:25:29.563 [2024-11-06 11:07:20.713144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e1e10 (9): Bad file descriptor 00:25:29.563 [2024-11-06 11:07:20.713158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:29.563 [2024-11-06 11:07:20.713165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:29.563 [2024-11-06 11:07:20.713174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:29.563 [2024-11-06 11:07:20.713182] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:29.563 [2024-11-06 11:07:20.713187] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:29.563 [2024-11-06 11:07:20.713192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:29.563 [2024-11-06 11:07:20.722552] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:29.563 [2024-11-06 11:07:20.722566] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:29.563 [2024-11-06 11:07:20.722571] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:29.563 [2024-11-06 11:07:20.722576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:29.563 [2024-11-06 11:07:20.722592] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:29.563 [2024-11-06 11:07:20.723037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.563 [2024-11-06 11:07:20.723076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e1e10 with addr=10.0.0.2, port=4420 00:25:29.563 [2024-11-06 11:07:20.723088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1e10 is same with the state(6) to be set 00:25:29.563 [2024-11-06 11:07:20.723107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e1e10 (9): Bad file descriptor 00:25:29.563 [2024-11-06 11:07:20.723120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:29.563 [2024-11-06 11:07:20.723127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:29.563 [2024-11-06 11:07:20.723135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:29.563 [2024-11-06 11:07:20.723142] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:29.563 [2024-11-06 11:07:20.723148] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:29.563 [2024-11-06 11:07:20.723160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:29.563 [2024-11-06 11:07:20.732628] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:29.563 [2024-11-06 11:07:20.732644] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:29.563 [2024-11-06 11:07:20.732649] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:29.563 [2024-11-06 11:07:20.732654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:29.563 [2024-11-06 11:07:20.732671] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:29.563 [2024-11-06 11:07:20.733024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.563 [2024-11-06 11:07:20.733062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e1e10 with addr=10.0.0.2, port=4420 00:25:29.563 [2024-11-06 11:07:20.733075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1e10 is same with the state(6) to be set 00:25:29.563 [2024-11-06 11:07:20.733096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e1e10 (9): Bad file descriptor 00:25:29.563 [2024-11-06 11:07:20.733109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:29.563 [2024-11-06 11:07:20.733118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:29.563 [2024-11-06 11:07:20.733128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:29.563 [2024-11-06 11:07:20.733136] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:29.563 [2024-11-06 11:07:20.733141] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:29.563 [2024-11-06 11:07:20.733146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:29.563 [2024-11-06 11:07:20.742704] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:29.563 [2024-11-06 11:07:20.742720] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:29.563 [2024-11-06 11:07:20.742725] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:29.563 [2024-11-06 11:07:20.742730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:29.563 [2024-11-06 11:07:20.742751] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:29.563 [2024-11-06 11:07:20.743038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.563 [2024-11-06 11:07:20.743052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e1e10 with addr=10.0.0.2, port=4420 00:25:29.563 [2024-11-06 11:07:20.743060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1e10 is same with the state(6) to be set 00:25:29.563 [2024-11-06 11:07:20.743072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e1e10 (9): Bad file descriptor 00:25:29.563 [2024-11-06 11:07:20.743083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:29.563 [2024-11-06 11:07:20.743090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:29.563 [2024-11-06 11:07:20.743097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:29.563 [2024-11-06 11:07:20.743104] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:29.564 [2024-11-06 11:07:20.743113] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:29.564 [2024-11-06 11:07:20.743118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:29.564 [2024-11-06 11:07:20.752783] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:29.564 [2024-11-06 11:07:20.752801] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:29.564 [2024-11-06 11:07:20.752808] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:25:29.564 [2024-11-06 11:07:20.752817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:29.564 [2024-11-06 11:07:20.752833] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:29.564 [2024-11-06 11:07:20.753133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.564 [2024-11-06 11:07:20.753148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e1e10 with addr=10.0.0.2, port=4420 00:25:29.564 [2024-11-06 11:07:20.753156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1e10 is same with the state(6) to be set 00:25:29.564 [2024-11-06 11:07:20.753168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e1e10 (9): Bad file descriptor 00:25:29.564 [2024-11-06 11:07:20.753181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:29.564 [2024-11-06 11:07:20.753188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:29.564 [2024-11-06 11:07:20.753196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:29.564 [2024-11-06 11:07:20.753202] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:29.564 [2024-11-06 11:07:20.753207] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:25:29.564 [2024-11-06 11:07:20.753212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:29.564 [2024-11-06 11:07:20.762864] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:29.564 [2024-11-06 11:07:20.762882] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:29.564 [2024-11-06 11:07:20.762887] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:29.564 [2024-11-06 11:07:20.762891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:29.564 [2024-11-06 11:07:20.762907] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:29.564 [2024-11-06 11:07:20.763206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.564 [2024-11-06 11:07:20.763219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e1e10 with addr=10.0.0.2, port=4420 00:25:29.564 [2024-11-06 11:07:20.763226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1e10 is same with the state(6) to be set 00:25:29.564 [2024-11-06 11:07:20.763237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e1e10 (9): Bad file descriptor 00:25:29.564 [2024-11-06 11:07:20.763248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:29.564 [2024-11-06 11:07:20.763254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:29.564 [2024-11-06 11:07:20.763261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:29.564 [2024-11-06 11:07:20.763268] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:29.564 [2024-11-06 11:07:20.763272] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:29.564 [2024-11-06 11:07:20.763277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:29.564 [2024-11-06 11:07:20.772939] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:29.564 [2024-11-06 11:07:20.772951] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:29.564 [2024-11-06 11:07:20.772955] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:29.564 [2024-11-06 11:07:20.772960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:29.564 [2024-11-06 11:07:20.772974] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:29.564 [2024-11-06 11:07:20.773264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.564 [2024-11-06 11:07:20.773275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e1e10 with addr=10.0.0.2, port=4420 00:25:29.564 [2024-11-06 11:07:20.773282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1e10 is same with the state(6) to be set 00:25:29.564 [2024-11-06 11:07:20.773293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e1e10 (9): Bad file descriptor 00:25:29.564 [2024-11-06 11:07:20.773304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:29.564 [2024-11-06 11:07:20.773310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:29.564 [2024-11-06 11:07:20.773317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:29.564 [2024-11-06 11:07:20.773323] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:29.564 [2024-11-06 11:07:20.773328] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:29.564 [2024-11-06 11:07:20.773332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:29.564 [2024-11-06 11:07:20.777380] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:29.564 [2024-11-06 11:07:20.777398] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.564 11:07:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:29.564 11:07:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.564 
11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.564 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:29.825 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:29.825 11:07:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.825 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.765 [2024-11-06 11:07:22.137949] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:30.765 [2024-11-06 11:07:22.137967] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:30.765 [2024-11-06 11:07:22.137980] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:31.026 [2024-11-06 11:07:22.224244] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:31.026 [2024-11-06 11:07:22.329060] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:31.026 [2024-11-06 11:07:22.329818] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x7f3050:1 started. 00:25:31.026 [2024-11-06 11:07:22.331633] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:31.026 [2024-11-06 11:07:22.331660] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:31.026 [2024-11-06 11:07:22.335450] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x7f3050 was disconnected and freed. delete nvme_qpair. 
00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.026 request: 00:25:31.026 { 00:25:31.026 "name": "nvme", 00:25:31.026 "trtype": "tcp", 00:25:31.026 "traddr": "10.0.0.2", 00:25:31.026 "adrfam": "ipv4", 00:25:31.026 "trsvcid": "8009", 00:25:31.026 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:31.026 "wait_for_attach": true, 00:25:31.026 "method": "bdev_nvme_start_discovery", 00:25:31.026 "req_id": 1 00:25:31.026 } 00:25:31.026 Got JSON-RPC error response 00:25:31.026 response: 00:25:31.026 { 00:25:31.026 "code": -17, 00:25:31.026 "message": "File exists" 00:25:31.026 } 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:31.026 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.287 request: 00:25:31.287 { 00:25:31.287 "name": "nvme_second", 00:25:31.287 "trtype": "tcp", 00:25:31.287 "traddr": "10.0.0.2", 00:25:31.287 "adrfam": "ipv4", 00:25:31.287 "trsvcid": "8009", 00:25:31.287 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:31.287 "wait_for_attach": true, 00:25:31.287 "method": "bdev_nvme_start_discovery", 00:25:31.287 "req_id": 1 00:25:31.287 } 00:25:31.287 Got JSON-RPC error response 00:25:31.287 response: 00:25:31.287 { 00:25:31.287 "code": -17, 00:25:31.287 "message": "File exists" 00:25:31.287 } 
00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:31.287 11:07:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.230 [2024-11-06 11:07:23.583088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.230 [2024-11-06 11:07:23.583115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f80 with addr=10.0.0.2, port=8010 00:25:32.230 [2024-11-06 11:07:23.583129] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:32.230 [2024-11-06 11:07:23.583136] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:32.230 [2024-11-06 11:07:23.583143] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:33.171 [2024-11-06 11:07:24.585474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.171 [2024-11-06 11:07:24.585496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f80 with addr=10.0.0.2, port=8010 00:25:33.171 [2024-11-06 11:07:24.585507] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:33.171 [2024-11-06 11:07:24.585513] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:33.171 [2024-11-06 11:07:24.585520] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:34.558 [2024-11-06 11:07:25.587449] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:34.558 request: 00:25:34.558 { 00:25:34.558 "name": "nvme_second", 00:25:34.558 "trtype": "tcp", 00:25:34.558 "traddr": "10.0.0.2", 00:25:34.558 "adrfam": "ipv4", 00:25:34.558 "trsvcid": "8010", 00:25:34.558 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:34.558 "wait_for_attach": false, 00:25:34.558 "attach_timeout_ms": 3000, 00:25:34.558 "method": "bdev_nvme_start_discovery", 00:25:34.558 "req_id": 1 
00:25:34.558 } 00:25:34.558 Got JSON-RPC error response 00:25:34.558 response: 00:25:34.558 { 00:25:34.558 "code": -110, 00:25:34.558 "message": "Connection timed out" 00:25:34.558 } 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3377597 00:25:34.558 11:07:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:34.558 rmmod nvme_tcp 00:25:34.558 rmmod nvme_fabrics 00:25:34.558 rmmod nvme_keyring 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3377389 ']' 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3377389 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 3377389 ']' 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 3377389 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3377389 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3377389' 00:25:34.558 killing process with pid 3377389 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 3377389 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 3377389 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.558 11:07:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.105 11:07:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:25:37.105 00:25:37.105 real 0m19.893s 00:25:37.105 user 0m23.158s 00:25:37.105 sys 0m6.968s 00:25:37.106 11:07:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:37.106 11:07:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.106 ************************************ 00:25:37.106 END TEST nvmf_host_discovery 00:25:37.106 ************************************ 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.106 ************************************ 00:25:37.106 START TEST nvmf_host_multipath_status 00:25:37.106 ************************************ 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:37.106 * Looking for test storage... 
00:25:37.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:37.106 11:07:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:37.106 11:07:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:37.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.106 --rc genhtml_branch_coverage=1 00:25:37.106 --rc genhtml_function_coverage=1 00:25:37.106 --rc genhtml_legend=1 00:25:37.106 --rc geninfo_all_blocks=1 00:25:37.106 --rc geninfo_unexecuted_blocks=1 00:25:37.106 00:25:37.106 ' 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:37.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.106 --rc genhtml_branch_coverage=1 00:25:37.106 --rc genhtml_function_coverage=1 00:25:37.106 --rc genhtml_legend=1 00:25:37.106 --rc geninfo_all_blocks=1 00:25:37.106 --rc geninfo_unexecuted_blocks=1 00:25:37.106 00:25:37.106 ' 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:37.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.106 --rc genhtml_branch_coverage=1 00:25:37.106 --rc genhtml_function_coverage=1 00:25:37.106 --rc genhtml_legend=1 00:25:37.106 --rc geninfo_all_blocks=1 00:25:37.106 --rc geninfo_unexecuted_blocks=1 00:25:37.106 00:25:37.106 ' 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:37.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.106 --rc genhtml_branch_coverage=1 00:25:37.106 --rc genhtml_function_coverage=1 00:25:37.106 --rc genhtml_legend=1 00:25:37.106 --rc geninfo_all_blocks=1 00:25:37.106 --rc geninfo_unexecuted_blocks=1 00:25:37.106 00:25:37.106 ' 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:37.106 
11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.106 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:37.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:37.107 11:07:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:37.107 11:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:45.254 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.254 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:45.254 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:45.254 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:45.255 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:45.255 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:45.255 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.255 11:07:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:45.255 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.255 11:07:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:45.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.758 ms 00:25:45.255 00:25:45.255 --- 10.0.0.2 ping statistics --- 00:25:45.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.255 rtt min/avg/max/mdev = 0.758/0.758/0.758/0.000 ms 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:45.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:25:45.255 00:25:45.255 --- 10.0.0.1 ping statistics --- 00:25:45.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.255 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:45.255 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3383747 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 3383747 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3383747 ']' 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:45.256 11:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:45.256 [2024-11-06 11:07:35.580240] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:25:45.256 [2024-11-06 11:07:35.580290] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.256 [2024-11-06 11:07:35.658339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:45.256 [2024-11-06 11:07:35.693465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.256 [2024-11-06 11:07:35.693500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:45.256 [2024-11-06 11:07:35.693508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.256 [2024-11-06 11:07:35.693515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.256 [2024-11-06 11:07:35.693520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:45.256 [2024-11-06 11:07:35.694801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.256 [2024-11-06 11:07:35.694802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.256 11:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:45.256 11:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:25:45.256 11:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:45.256 11:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:45.256 11:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:45.256 11:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.256 11:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3383747 00:25:45.256 11:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:45.256 [2024-11-06 11:07:36.560116] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.256 11:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:25:45.518 Malloc0 00:25:45.518 11:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:45.518 11:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:45.779 11:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:46.040 [2024-11-06 11:07:37.240058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.040 11:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:46.040 [2024-11-06 11:07:37.408427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:46.040 11:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3384127 00:25:46.040 11:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:46.040 11:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:46.040 11:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3384127 /var/tmp/bdevperf.sock 00:25:46.040 11:07:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3384127 ']' 00:25:46.040 11:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:46.040 11:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:46.040 11:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:46.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:46.040 11:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:46.040 11:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:46.301 11:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:46.301 11:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:25:46.301 11:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:46.561 11:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:47.132 Nvme0n1 00:25:47.132 11:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:47.392 Nvme0n1 00:25:47.393 11:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:47.393 11:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:49.305 11:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:49.305 11:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:49.566 11:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:49.566 11:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:50.948 11:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:50.948 11:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:50.948 11:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.948 11:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:50.948 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.948 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:50.948 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.948 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:50.948 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:50.948 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:50.948 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.948 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:51.209 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.209 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:51.209 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.209 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:51.469 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.469 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:51.469 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.469 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:51.729 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.729 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:51.729 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.729 11:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:51.729 11:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.729 11:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:51.729 11:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:51.990 11:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:52.250 11:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:53.192 11:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:53.192 11:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:53.192 11:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.192 11:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:53.452 11:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.452 11:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:53.452 11:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.452 11:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:53.452 11:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.452 11:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:53.452 11:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.452 11:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:53.712 11:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.712 11:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:53.712 11:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.712 11:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:53.972 11:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.972 11:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:53.972 11:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.972 11:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:53.972 11:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.972 11:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:53.972 11:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.972 11:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:54.233 11:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.233 11:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:54.233 11:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:54.494 11:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:54.494 11:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:55.877 11:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:55.877 11:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:55.877 11:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.877 11:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:55.877 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.877 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:55.877 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.877 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:55.877 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:55.877 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:55.877 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.877 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:56.138 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.138 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:56.138 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.138 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:56.398 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.398 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:56.398 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.398 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:56.658 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.658 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:56.658 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.658 11:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:56.658 11:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.658 11:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:56.658 11:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:56.918 11:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:57.178 11:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:58.119 11:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:58.119 11:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:58.119 11:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.119 11:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:58.379 11:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.380 11:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:58.380 11:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.380 11:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:58.641 11:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:58.641 11:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:58.641 11:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.641 11:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:58.641 11:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.641 11:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:58.641 11:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.641 11:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:58.901 11:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.901 11:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:58.901 11:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.901 11:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:59.162 11:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.162 11:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:59.162 11:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.162 11:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:59.162 11:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:59.162 11:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:59.162 11:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:59.423 11:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:59.684 11:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:00.626 11:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:00.626 11:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:00.626 11:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.626 11:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:00.887 11:07:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:00.887 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:00.887 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.887 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:00.887 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:00.887 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:00.887 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.887 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:01.149 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.149 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:01.149 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.149 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:01.411 
11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.411 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:01.411 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.411 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:01.672 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:01.672 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:01.672 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.672 11:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:01.672 11:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:01.672 11:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:01.672 11:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:01.932 11:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:02.193 11:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:03.137 11:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:03.137 11:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:03.137 11:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.137 11:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:03.398 11:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.398 11:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:03.398 11:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.398 11:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:03.398 11:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.398 11:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:03.398 11:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.398 11:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:03.659 11:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.659 11:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:03.659 11:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.659 11:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:03.919 11:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.919 11:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:03.919 11:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.919 11:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:03.919 11:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.919 11:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:03.919 11:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.919 11:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:04.180 11:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.180 11:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:04.441 11:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:04.441 11:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:04.441 11:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:04.701 11:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:05.643 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:05.643 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:05.643 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:05.643 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.904 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.904 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:05.904 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.904 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:06.165 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.165 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:06.165 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.165 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:06.431 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.431 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:06.431 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:06.431 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:06.431 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.431 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:06.431 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.431 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:06.782 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.782 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:06.782 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:06.782 11:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.782 11:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.782 11:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:06.782 11:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:07.147 11:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:07.147 11:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:08.098 11:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:08.098 11:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:08.098 11:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.098 11:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:08.358 11:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.358 11:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:08.358 11:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.358 11:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:08.619 11:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.619 11:07:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:08.619 11:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.619 11:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:08.881 11:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.881 11:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:08.881 11:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.881 11:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:08.881 11:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.881 11:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:08.881 11:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.881 11:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:09.141 11:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.141 
11:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:09.141 11:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.141 11:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:09.401 11:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.401 11:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:09.401 11:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:09.401 11:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:09.662 11:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:10.604 11:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:10.604 11:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:10.604 11:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.604 11:08:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.864 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.864 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:10.864 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:10.864 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.125 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.125 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:11.125 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.125 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:11.386 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.386 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:11.386 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.386 11:08:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.386 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.386 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.386 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.386 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.646 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.646 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:11.646 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.646 11:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:11.907 11:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.907 11:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:11.907 11:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:11.907 11:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:12.168 11:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:13.109 11:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:13.109 11:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:13.109 11:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.109 11:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:13.370 11:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.370 11:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:13.370 11:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.370 11:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:13.630 11:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.630 11:08:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:13.630 11:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.630 11:08:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.630 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.630 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:13.630 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.630 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.891 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.891 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:13.891 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.891 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:14.151 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.151 
11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:14.151 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.151 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:14.414 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:14.415 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3384127 00:26:14.415 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3384127 ']' 00:26:14.415 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3384127 00:26:14.415 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:26:14.415 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:14.415 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3384127 00:26:14.415 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:26:14.415 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:26:14.415 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3384127' 00:26:14.415 killing process with pid 3384127 00:26:14.415 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3384127 00:26:14.415 
11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3384127 00:26:14.415 { 00:26:14.415 "results": [ 00:26:14.415 { 00:26:14.415 "job": "Nvme0n1", 00:26:14.415 "core_mask": "0x4", 00:26:14.415 "workload": "verify", 00:26:14.415 "status": "terminated", 00:26:14.415 "verify_range": { 00:26:14.415 "start": 0, 00:26:14.415 "length": 16384 00:26:14.415 }, 00:26:14.415 "queue_depth": 128, 00:26:14.415 "io_size": 4096, 00:26:14.415 "runtime": 26.936483, 00:26:14.415 "iops": 10738.224437095221, 00:26:14.415 "mibps": 41.94618920740321, 00:26:14.415 "io_failed": 0, 00:26:14.415 "io_timeout": 0, 00:26:14.415 "avg_latency_us": 11901.877363941227, 00:26:14.415 "min_latency_us": 358.4, 00:26:14.415 "max_latency_us": 3019898.88 00:26:14.415 } 00:26:14.415 ], 00:26:14.415 "core_count": 1 00:26:14.415 } 00:26:14.415 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3384127 00:26:14.415 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:14.415 [2024-11-06 11:07:37.473716] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:26:14.415 [2024-11-06 11:07:37.473779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3384127 ] 00:26:14.415 [2024-11-06 11:07:37.532003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.415 [2024-11-06 11:07:37.560657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:14.415 Running I/O for 90 seconds... 
00:26:14.415 9459.00 IOPS, 36.95 MiB/s [2024-11-06T10:08:05.837Z] 9540.50 IOPS, 37.27 MiB/s [2024-11-06T10:08:05.837Z] 9591.67 IOPS, 37.47 MiB/s [2024-11-06T10:08:05.837Z] 9599.75 IOPS, 37.50 MiB/s [2024-11-06T10:08:05.837Z] 9843.60 IOPS, 38.45 MiB/s [2024-11-06T10:08:05.837Z] 10349.83 IOPS, 40.43 MiB/s [2024-11-06T10:08:05.837Z] 10690.14 IOPS, 41.76 MiB/s [2024-11-06T10:08:05.837Z] 10662.38 IOPS, 41.65 MiB/s [2024-11-06T10:08:05.837Z] 10544.22 IOPS, 41.19 MiB/s [2024-11-06T10:08:05.837Z] 10454.50 IOPS, 40.84 MiB/s [2024-11-06T10:08:05.837Z] 10382.36 IOPS, 40.56 MiB/s [2024-11-06T10:08:05.837Z] 10322.83 IOPS, 40.32 MiB/s [2024-11-06T10:08:05.837Z] [2024-11-06 11:07:50.718173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.415 [2024-11-06 11:07:50.718205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.718238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.415 [2024-11-06 11:07:50.718245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.718256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.415 [2024-11-06 11:07:50.718261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.718272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.415 [2024-11-06 11:07:50.718277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.718287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.415 [2024-11-06 11:07:50.718292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.718303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.415 [2024-11-06 11:07:50.718308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.718318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.415 [2024-11-06 11:07:50.718323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.718334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.415 [2024-11-06 11:07:50.718339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.718496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.415 [2024-11-06 11:07:50.718504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.718516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:89 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.415 [2024-11-06 11:07:50.718527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.718537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.415 [2024-11-06 11:07:50.718543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.718553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.415 [2024-11-06 11:07:50.718558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.718569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.415 [2024-11-06 11:07:50.718575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.718585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.415 [2024-11-06 11:07:50.718591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.718602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.415 [2024-11-06 11:07:50.718607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:111 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.718618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.415 [2024-11-06 11:07:50.718623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.720088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.415 [2024-11-06 11:07:50.720096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.720108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.415 [2024-11-06 11:07:50.720114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.720125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.415 [2024-11-06 11:07:50.720130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.720141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.415 [2024-11-06 11:07:50.720146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.720157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74592 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.415 [2024-11-06 11:07:50.720162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.720174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.415 [2024-11-06 11:07:50.720181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.720192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.415 [2024-11-06 11:07:50.720197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:14.415 [2024-11-06 11:07:50.720208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.415 [2024-11-06 11:07:50.720213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 
cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:74680 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.416 [2024-11-06 11:07:50.720358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.416 [2024-11-06 11:07:50.720406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.416 [2024-11-06 11:07:50.720425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.416 [2024-11-06 11:07:50.720442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.416 [2024-11-06 11:07:50.720459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005c p:0 
m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.416 [2024-11-06 11:07:50.720477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.416 [2024-11-06 11:07:50.720494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:14.416 [2024-11-06 11:07:50.720562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:14.416 
[2024-11-06 11:07:50.720662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 
[2024-11-06 11:07:50.720756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 11:07:50.720836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.416 [2024-11-06 11:07:50.720843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:14.416 [2024-11-06 
11:07:50.720855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.720860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.720872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.720876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.720888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.720894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.720906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.720911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.720922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.720927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.720939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 
11:07:50.720945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.720957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.720962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.720974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.720979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.720990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.720996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 
11:07:50.721041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 
11:07:50.721686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 
11:07:50.721800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721904] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.417 [2024-11-06 11:07:50.721922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.721955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.721960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.722005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.722012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.722027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.722033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.722047] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.722053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.722067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.722072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.722087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.417 [2024-11-06 11:07:50.722092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:14.417 [2024-11-06 11:07:50.722109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:14.418 [2024-11-06 11:07:50.722744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.418 [2024-11-06 11:07:50.722754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:14.418 9539.15 IOPS, 37.26 MiB/s [2024-11-06T10:08:05.840Z] 8857.79 IOPS, 34.60 MiB/s [2024-11-06T10:08:05.840Z] 8267.27 IOPS, 32.29 MiB/s [2024-11-06T10:08:05.840Z] 8547.00 IOPS, 33.39 MiB/s [2024-11-06T10:08:05.840Z] 8798.47 IOPS, 34.37 MiB/s [2024-11-06T10:08:05.840Z] 9201.89 IOPS, 35.94 MiB/s [2024-11-06T10:08:05.840Z] 9601.42 IOPS, 37.51 MiB/s [2024-11-06T10:08:05.841Z] 9896.90 IOPS, 38.66 MiB/s [2024-11-06T10:08:05.841Z] 10036.86 IOPS, 39.21 MiB/s [2024-11-06T10:08:05.841Z] 10166.41 IOPS, 39.71 MiB/s [2024-11-06T10:08:05.841Z] 10403.78 IOPS, 40.64 MiB/s [2024-11-06T10:08:05.841Z] 10669.92 IOPS, 41.68 MiB/s [2024-11-06T10:08:05.841Z] [2024-11-06 11:08:03.416605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.419 [2024-11-06 11:08:03.416641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:14.419 [2024-11-06 11:08:03.416672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.419 [2024-11-06 11:08:03.416679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:14.419 [2024-11-06 11:08:03.416690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.419 [2024-11-06 11:08:03.416699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:14.419 [2024-11-06 11:08:03.416976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:43808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.419 [2024-11-06 11:08:03.416986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:14.419 [2024-11-06 11:08:03.416997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.419 [2024-11-06 11:08:03.417003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:14.419 [2024-11-06 11:08:03.417013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.419 [2024-11-06 11:08:03.417019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:14.419 [2024-11-06 11:08:03.417029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43872 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:14.419 [2024-11-06 11:08:03.417035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:14.419 10826.16 IOPS, 42.29 MiB/s [2024-11-06T10:08:05.841Z] 10780.27 IOPS, 42.11 MiB/s [2024-11-06T10:08:05.841Z] Received shutdown signal, test time was about 26.937093 seconds 00:26:14.419 00:26:14.419 Latency(us) 00:26:14.419 [2024-11-06T10:08:05.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.419 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:14.419 Verification LBA range: start 0x0 length 0x4000 00:26:14.419 Nvme0n1 : 26.94 10738.22 41.95 0.00 0.00 11901.88 358.40 3019898.88 00:26:14.419 [2024-11-06T10:08:05.841Z] =================================================================================================================== 00:26:14.419 [2024-11-06T10:08:05.841Z] Total : 10738.22 41.95 0.00 0.00 11901.88 358.40 3019898.88 00:26:14.419 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:14.679 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:14.679 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:14.679 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:14.679 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:14.679 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:26:14.679 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:26:14.679 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:14.679 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:14.679 11:08:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:14.679 rmmod nvme_tcp 00:26:14.679 rmmod nvme_fabrics 00:26:14.679 rmmod nvme_keyring 00:26:14.679 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:14.680 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:14.680 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:14.680 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3383747 ']' 00:26:14.680 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3383747 00:26:14.680 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3383747 ']' 00:26:14.680 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3383747 00:26:14.680 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:26:14.680 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:14.680 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3383747 00:26:14.680 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:14.680 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:14.680 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process 
with pid 3383747' 00:26:14.680 killing process with pid 3383747 00:26:14.680 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3383747 00:26:14.680 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3383747 00:26:14.944 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:14.944 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:14.944 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:14.944 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:14.944 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:26:14.944 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:14.944 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:26:14.944 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:14.944 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:14.944 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.944 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.944 11:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:17.489 00:26:17.489 real 0m40.256s 00:26:17.489 user 1m44.397s 00:26:17.489 sys 0m11.382s 00:26:17.489 11:08:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:17.489 ************************************ 00:26:17.489 END TEST nvmf_host_multipath_status 00:26:17.489 ************************************ 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.489 ************************************ 00:26:17.489 START TEST nvmf_discovery_remove_ifc 00:26:17.489 ************************************ 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:17.489 * Looking for test storage... 
00:26:17.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:26:17.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.489 --rc genhtml_branch_coverage=1 00:26:17.489 --rc genhtml_function_coverage=1 00:26:17.489 --rc genhtml_legend=1 00:26:17.489 --rc geninfo_all_blocks=1 00:26:17.489 --rc geninfo_unexecuted_blocks=1 00:26:17.489 00:26:17.489 ' 00:26:17.489 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:17.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.489 --rc genhtml_branch_coverage=1 00:26:17.489 --rc genhtml_function_coverage=1 00:26:17.489 --rc genhtml_legend=1 00:26:17.489 --rc geninfo_all_blocks=1 00:26:17.489 --rc geninfo_unexecuted_blocks=1 00:26:17.489 00:26:17.489 ' 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:17.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.490 --rc genhtml_branch_coverage=1 00:26:17.490 --rc genhtml_function_coverage=1 00:26:17.490 --rc genhtml_legend=1 00:26:17.490 --rc geninfo_all_blocks=1 00:26:17.490 --rc geninfo_unexecuted_blocks=1 00:26:17.490 00:26:17.490 ' 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:17.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.490 --rc genhtml_branch_coverage=1 00:26:17.490 --rc genhtml_function_coverage=1 00:26:17.490 --rc genhtml_legend=1 00:26:17.490 --rc geninfo_all_blocks=1 00:26:17.490 --rc geninfo_unexecuted_blocks=1 00:26:17.490 00:26:17.490 ' 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:17.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:17.490 
11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:17.490 11:08:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:25.632 11:08:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:25.632 11:08:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:25.632 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:25.633 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:25.633 11:08:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:25.633 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:25.633 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:25.633 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:25.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:25.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:26:25.633 00:26:25.633 --- 10.0.0.2 ping statistics --- 00:26:25.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.633 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:25.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:25.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:26:25.633 00:26:25.633 --- 10.0.0.1 ping statistics --- 00:26:25.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.633 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3393985 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 3393985 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 3393985 ']' 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:25.633 11:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.633 [2024-11-06 11:08:16.001310] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:26:25.633 [2024-11-06 11:08:16.001380] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.633 [2024-11-06 11:08:16.102996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.633 [2024-11-06 11:08:16.153355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.633 [2024-11-06 11:08:16.153407] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
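Annotation (not part of the log): the interface setup traced at nvmf/common.sh@271-291 above reduces to the sequence below. This is a dry-run sketch — `run` only echoes, since the real commands need root plus the cvl_0_* E810 netdevs, and the harness's `ipts` wrapper additionally tags the iptables rule with an SPDK_NVMF comment.

```shell
# Dry-run sketch of the target/initiator split built by nvmf_tcp_init.
# Interface names, namespace name and IPs are taken from the log above.
run() { echo "+ $*"; }   # swap for direct execution on real hardware

NS=cvl_0_0_ns_spdk       # the target-side NIC lives in its own namespace
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
```

The two pings are the harness's sanity check that both directions route before `nvmf_tgt` is launched inside the namespace.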
00:26:25.633 [2024-11-06 11:08:16.153416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.633 [2024-11-06 11:08:16.153423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.633 [2024-11-06 11:08:16.153430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:25.633 [2024-11-06 11:08:16.154184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.633 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:25.633 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:25.633 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:25.633 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:25.633 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.633 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.633 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:25.633 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.633 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.633 [2024-11-06 11:08:16.887873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.633 [2024-11-06 11:08:16.896164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:25.633 null0 00:26:25.633 [2024-11-06 11:08:16.928089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:26:25.633 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.634 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3394038 00:26:25.634 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:25.634 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3394038 /tmp/host.sock 00:26:25.634 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 3394038 ']' 00:26:25.634 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:26:25.634 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:25.634 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:25.634 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:25.634 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:25.634 11:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.634 [2024-11-06 11:08:17.014278] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:26:25.634 [2024-11-06 11:08:17.014344] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3394038 ] 00:26:25.894 [2024-11-06 11:08:17.091074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.894 [2024-11-06 11:08:17.134021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.464 11:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:26.464 11:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:26.464 11:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:26.464 11:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:26.464 11:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.464 11:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.464 11:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.464 11:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:26.464 11:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.464 11:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.464 11:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.464 11:08:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:26.464 11:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.464 11:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.849 [2024-11-06 11:08:18.928708] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:27.849 [2024-11-06 11:08:18.928728] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:27.849 [2024-11-06 11:08:18.928741] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:27.849 [2024-11-06 11:08:19.059195] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:27.849 [2024-11-06 11:08:19.240226] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:27.849 [2024-11-06 11:08:19.241318] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x5ab3f0:1 started. 
00:26:27.849 [2024-11-06 11:08:19.242887] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:27.849 [2024-11-06 11:08:19.242932] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:27.849 [2024-11-06 11:08:19.242952] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:27.849 [2024-11-06 11:08:19.242966] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:27.849 [2024-11-06 11:08:19.242990] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:27.849 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.849 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:27.849 [2024-11-06 11:08:19.246091] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x5ab3f0 was disconnected and freed. delete nvme_qpair. 
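Annotation: the discovery connection that just attached was opened by the `bdev_nvme_start_discovery` RPC at discovery_remove_ifc.sh@69. Restated below as a dry-run (echoed only — executing it needs a live `/tmp/host.sock`; `rpc.py` stands in for the repo's `scripts/rpc.py`), because the three timeout flags are what make the later interface removal observable quickly:

```shell
# --reconnect-delay-sec 1      : retry the lost connection every second
# --fast-io-fail-timeout-sec 1 : fail outstanding I/O after 1s disconnected
# --ctrlr-loss-timeout-sec 2   : give up and delete the controller after 2s
run() { echo "+ $*"; }   # dry-run; point rpc.py at a live socket to execute
run rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 \
    --wait-for-attach
```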
00:26:27.850 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:27.850 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:27.850 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:27.850 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.850 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:27.850 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.850 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:28.111 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.111 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:28.111 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:28.111 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:28.111 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:28.111 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:28.111 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.111 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:28.111 11:08:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.111 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:28.111 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.111 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:28.111 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.111 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:28.111 11:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:29.053 11:08:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:29.053 11:08:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.053 11:08:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:29.053 11:08:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.053 11:08:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:29.053 11:08:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.053 11:08:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:29.313 11:08:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.313 11:08:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:29.313 11:08:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:26:30.254 11:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.254 11:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.254 11:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.254 11:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.254 11:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.254 11:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.254 11:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.254 11:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.254 11:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:30.254 11:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:31.196 11:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:31.196 11:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.196 11:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:31.196 11:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.196 11:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.196 11:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:31.196 11:08:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:31.196 11:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.456 11:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:31.457 11:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:32.398 11:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:32.398 11:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.398 11:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:32.398 11:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.398 11:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:32.398 11:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.398 11:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:32.398 11:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.398 11:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:32.398 11:08:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:33.341 [2024-11-06 11:08:24.683935] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:33.342 [2024-11-06 11:08:24.683984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.342 [2024-11-06 11:08:24.683996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.342 [2024-11-06 11:08:24.684008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.342 [2024-11-06 11:08:24.684020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.342 [2024-11-06 11:08:24.684028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.342 [2024-11-06 11:08:24.684035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.342 [2024-11-06 11:08:24.684043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.342 [2024-11-06 11:08:24.684051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.342 [2024-11-06 11:08:24.684059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.342 [2024-11-06 11:08:24.684066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.342 [2024-11-06 11:08:24.684073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587c00 is same with the state(6) to be set 00:26:33.342 11:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.342 11:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.342 11:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.342 11:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.342 11:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.342 11:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.342 11:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.342 [2024-11-06 11:08:24.693956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x587c00 (9): Bad file descriptor 00:26:33.342 [2024-11-06 11:08:24.703995] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:33.342 [2024-11-06 11:08:24.704011] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:33.342 [2024-11-06 11:08:24.704016] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:33.342 [2024-11-06 11:08:24.704022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:33.342 [2024-11-06 11:08:24.704048] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:33.342 11:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.342 11:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:33.342 11:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:34.724 [2024-11-06 11:08:25.716771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:34.724 [2024-11-06 11:08:25.716810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x587c00 with addr=10.0.0.2, port=4420 00:26:34.724 [2024-11-06 11:08:25.716821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x587c00 is same with the state(6) to be set 00:26:34.724 [2024-11-06 11:08:25.716843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x587c00 (9): Bad file descriptor 00:26:34.724 [2024-11-06 11:08:25.716883] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:26:34.724 [2024-11-06 11:08:25.716904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.724 [2024-11-06 11:08:25.716917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.724 [2024-11-06 11:08:25.716926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.724 [2024-11-06 11:08:25.716933] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.724 [2024-11-06 11:08:25.716939] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
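Annotation: the repeating `get_bdev_list` / `sleep 1` cycle in the trace is the test's `wait_for_bdev` polling helper — `bdev_get_bdevs` piped through `jq -r '.[].name' | sort | xargs`, compared against the expected name until it matches. A self-contained mock of that pattern, with a hypothetical in-memory bdev list standing in for the RPC socket:

```shell
# Mock wait_for_bdev: poll a bdev list until it matches the expected value.
# Here the "RPC" is faked so the bdev disappears on the third poll; state
# is kept in globals (not a $() subshell) so the counter survives each call.
attempts=0
bdev_list=""
get_bdev_list() {
  attempts=$((attempts + 1))
  if [ "$attempts" -lt 3 ]; then bdev_list="nvme0n1"; else bdev_list=""; fi
}
wait_for_bdev() {
  get_bdev_list
  while [ "$bdev_list" != "$1" ]; do
    sleep 0.1        # the real harness sleeps 1s between polls
    get_bdev_list
  done
}
wait_for_bdev ""     # returns once the bdev list drains to empty
echo "bdev gone after $attempts polls"
```

The real helper passes `nvme0n1` while waiting for attach and `''` while waiting for the removal above to propagate.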
00:26:34.724 [2024-11-06 11:08:25.716943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.724 [2024-11-06 11:08:25.716951] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.724 [2024-11-06 11:08:25.716956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.724 11:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:34.724 11:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:34.724 11:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.724 11:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:34.724 11:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.724 11:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.724 11:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.724 11:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.724 11:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:34.724 11:08:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:35.665 [2024-11-06 11:08:26.719329] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:35.665 [2024-11-06 11:08:26.719349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:35.665 [2024-11-06 11:08:26.719360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:35.665 [2024-11-06 11:08:26.719368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:35.665 [2024-11-06 11:08:26.719376] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:35.665 [2024-11-06 11:08:26.719383] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:35.665 [2024-11-06 11:08:26.719388] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:35.665 [2024-11-06 11:08:26.719393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:35.665 [2024-11-06 11:08:26.719413] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:35.665 [2024-11-06 11:08:26.719433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.665 [2024-11-06 11:08:26.719443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.665 [2024-11-06 11:08:26.719453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.665 [2024-11-06 11:08:26.719460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.665 [2024-11-06 11:08:26.719469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:35.665 [2024-11-06 11:08:26.719480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.665 [2024-11-06 11:08:26.719489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.665 [2024-11-06 11:08:26.719496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.665 [2024-11-06 11:08:26.719504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.665 [2024-11-06 11:08:26.719512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.665 [2024-11-06 11:08:26.719519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:35.665 [2024-11-06 11:08:26.719545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577340 (9): Bad file descriptor 00:26:35.665 [2024-11-06 11:08:26.720543] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:35.665 [2024-11-06 11:08:26.720555] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:35.665 11:08:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:36.607 11:08:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:36.607 11:08:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.607 11:08:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:36.607 11:08:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.607 11:08:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:36.607 11:08:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.607 11:08:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:36.607 11:08:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.867 11:08:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:36.867 11:08:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:37.437 [2024-11-06 11:08:28.771933] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:37.437 [2024-11-06 11:08:28.771949] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:37.437 [2024-11-06 11:08:28.771962] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:37.697 [2024-11-06 11:08:28.860245] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:37.697 [2024-11-06 11:08:28.959081] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:37.697 [2024-11-06 11:08:28.959876] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x57c130:1 started. 00:26:37.697 [2024-11-06 11:08:28.961100] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:37.697 [2024-11-06 11:08:28.961132] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:37.697 [2024-11-06 11:08:28.961152] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:37.697 [2024-11-06 11:08:28.961166] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:37.697 [2024-11-06 11:08:28.961173] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:37.697 [2024-11-06 11:08:28.969887] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x57c130 was disconnected and freed. delete nvme_qpair. 
00:26:37.697 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.697 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.697 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.697 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.697 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.697 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.697 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.697 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.697 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:37.697 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:37.697 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3394038 00:26:37.697 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3394038 ']' 00:26:37.697 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3394038 00:26:37.697 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:26:37.697 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:37.697 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3394038 
00:26:37.957 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:37.957 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:37.957 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3394038' 00:26:37.957 killing process with pid 3394038 00:26:37.957 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3394038 00:26:37.957 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3394038 00:26:37.957 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:37.957 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:37.957 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:37.957 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:37.957 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:37.957 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:37.957 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:37.957 rmmod nvme_tcp 00:26:37.957 rmmod nvme_fabrics 00:26:37.957 rmmod nvme_keyring 00:26:37.957 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:37.957 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:37.957 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:37.957 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3393985 ']' 00:26:37.958 
11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3393985 00:26:37.958 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3393985 ']' 00:26:37.958 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3393985 00:26:37.958 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:26:37.958 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:37.958 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3393985 00:26:38.218 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:38.218 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:38.219 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3393985' 00:26:38.219 killing process with pid 3393985 00:26:38.219 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3393985 00:26:38.219 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3393985 00:26:38.219 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:38.219 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:38.219 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:38.219 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:38.219 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:38.219 11:08:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:38.219 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:38.219 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:38.219 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:38.219 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.219 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.219 11:08:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:40.761 00:26:40.761 real 0m23.212s 00:26:40.761 user 0m27.418s 00:26:40.761 sys 0m6.993s 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.761 ************************************ 00:26:40.761 END TEST nvmf_discovery_remove_ifc 00:26:40.761 ************************************ 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.761 ************************************ 00:26:40.761 
START TEST nvmf_identify_kernel_target 00:26:40.761 ************************************ 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:40.761 * Looking for test storage... 00:26:40.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:40.761 11:08:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:40.761 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:40.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.762 --rc genhtml_branch_coverage=1 00:26:40.762 --rc genhtml_function_coverage=1 00:26:40.762 --rc genhtml_legend=1 00:26:40.762 --rc geninfo_all_blocks=1 00:26:40.762 --rc geninfo_unexecuted_blocks=1 00:26:40.762 00:26:40.762 ' 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:40.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.762 --rc genhtml_branch_coverage=1 00:26:40.762 --rc genhtml_function_coverage=1 00:26:40.762 --rc genhtml_legend=1 00:26:40.762 --rc geninfo_all_blocks=1 00:26:40.762 --rc geninfo_unexecuted_blocks=1 00:26:40.762 00:26:40.762 ' 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:40.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.762 --rc genhtml_branch_coverage=1 00:26:40.762 --rc genhtml_function_coverage=1 00:26:40.762 --rc genhtml_legend=1 00:26:40.762 --rc geninfo_all_blocks=1 00:26:40.762 --rc geninfo_unexecuted_blocks=1 00:26:40.762 00:26:40.762 ' 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:40.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.762 --rc genhtml_branch_coverage=1 00:26:40.762 --rc genhtml_function_coverage=1 00:26:40.762 --rc genhtml_legend=1 00:26:40.762 --rc geninfo_all_blocks=1 
00:26:40.762 --rc geninfo_unexecuted_blocks=1 00:26:40.762 00:26:40.762 ' 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:40.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:40.762 11:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:47.353 11:08:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:47.353 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:47.353 11:08:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:47.353 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.353 11:08:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:47.353 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:47.353 Found net devices under 0000:4b:00.1: cvl_0_1 
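The trace above resolves each NIC's PCI address to its kernel net device name (`cvl_0_0`, `cvl_0_1`) by globbing `/sys/bus/pci/devices/$pci/net/*`. A minimal sketch of that mapping, run against a mock sysfs tree so it works without the E810 hardware (the PCI address and device name are copied from this log; the temp-dir layout is illustrative):

```shell
# Sketch: map NIC PCI addresses to kernel net device names the way
# nvmf/common.sh does, by globbing /sys/bus/pci/devices/$pci/net/*.
# A mock sysfs tree stands in for the real one here.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:4b:00.0/net/cvl_0_0"

found=$(
  for pci in "$sysfs"/*; do
    for net in "$pci"/net/*; do
      [ -e "$net" ] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
  done
)
echo "$found"   # Found net devices under 0000:4b:00.0: cvl_0_0
```

Against the real sysfs the outer glob would be the cached PCI addresses (`0000:4b:00.0`, `0000:4b:00.1`), producing exactly the two "Found net devices under ..." lines seen in the trace.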
00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.353 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:47.354 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:47.615 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:47.615 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:47.615 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:47.615 11:08:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:47.615 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:47.615 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:47.615 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:47.615 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:47.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:47.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:26:47.615 00:26:47.615 --- 10.0.0.2 ping statistics --- 00:26:47.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.615 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:26:47.615 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:47.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:47.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:26:47.877 00:26:47.877 --- 10.0.0.1 ping statistics --- 00:26:47.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.877 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:47.877 
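The `nvmf_tcp_init` steps above split the NIC pair across network namespaces so target (10.0.0.2) and initiator (10.0.0.1) can talk over real hardware on a single host, then verify reachability with a ping in each direction. A condensed sketch of that sequence, using the interface names from this log (requires root; a privileged config fragment, not meant to run in CI as-is):

```shell
# Sketch of the netns split performed by nvmf/common.sh (root required).
# cvl_0_0 becomes the target-side interface inside the namespace;
# cvl_0_1 stays in the default namespace as the initiator side.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # move target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP (default ns)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP (inside ns)
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
ping -c 1 10.0.0.2                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator
```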
11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:47.877 11:08:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:51.185 Waiting for block devices as requested 00:26:51.185 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:51.185 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:51.185 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:51.185 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:51.185 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:51.445 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:51.445 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:51.445 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:51.706 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:51.706 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:51.966 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:51.966 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:51.966 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:52.227 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:52.227 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:26:52.227 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:52.227 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:52.488 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:52.488 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:52.488 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:52.488 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:52.488 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:52.488 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:52.488 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:52.488 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:52.488 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:52.749 No valid GPT data, bailing 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:52.750 11:08:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:52.750 00:26:52.750 Discovery Log Number of Records 2, Generation counter 2 00:26:52.750 =====Discovery Log Entry 0====== 00:26:52.750 trtype: tcp 00:26:52.750 adrfam: ipv4 00:26:52.750 subtype: current discovery subsystem 
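`configure_kernel_target` above builds a Linux kernel NVMe-oF target entirely through configfs: one subsystem (`nqn.2016-06.io.spdk:testnqn`), one namespace backed by `/dev/nvme0n1`, and one TCP listener on 10.0.0.1:4420. The bare `echo`s in the trace write to configfs attribute files; a sketch assuming the standard nvmet configfs layout (attribute file names are my mapping of those echoes, not printed in the log; requires root and the nvmet/nvmet-tcp modules):

```shell
# Kernel NVMe-oF TCP target via configfs, mirroring nvmf/common.sh@686-705
# (root required; a privileged config fragment).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
ns=$subsys/namespaces/1
port=$nvmet/ports/1

modprobe nvmet nvmet-tcp
mkdir "$subsys" "$ns" "$port"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model number
echo 1 > "$subsys/attr_allow_any_host"                         # no host allow-list
echo /dev/nvme0n1 > "$ns/device_path"                          # back the namespace
echo 1 > "$ns/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"   # expose the subsystem on the port
```

Once the symlink lands, the target answers discovery, which is what the `nvme discover -t tcp -a 10.0.0.1 -s 4420` output that follows shows: a discovery subsystem entry plus the `nqn.2016-06.io.spdk:testnqn` NVM subsystem entry.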
00:26:52.750 treq: not specified, sq flow control disable supported 00:26:52.750 portid: 1 00:26:52.750 trsvcid: 4420 00:26:52.750 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:52.750 traddr: 10.0.0.1 00:26:52.750 eflags: none 00:26:52.750 sectype: none 00:26:52.750 =====Discovery Log Entry 1====== 00:26:52.750 trtype: tcp 00:26:52.750 adrfam: ipv4 00:26:52.750 subtype: nvme subsystem 00:26:52.750 treq: not specified, sq flow control disable supported 00:26:52.750 portid: 1 00:26:52.750 trsvcid: 4420 00:26:52.750 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:52.750 traddr: 10.0.0.1 00:26:52.750 eflags: none 00:26:52.750 sectype: none 00:26:52.750 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:52.750 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:52.750 ===================================================== 00:26:52.750 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:52.750 ===================================================== 00:26:52.750 Controller Capabilities/Features 00:26:52.750 ================================ 00:26:52.750 Vendor ID: 0000 00:26:52.750 Subsystem Vendor ID: 0000 00:26:52.750 Serial Number: 6d8a2a3306f20e2ac671 00:26:52.750 Model Number: Linux 00:26:52.750 Firmware Version: 6.8.9-20 00:26:52.750 Recommended Arb Burst: 0 00:26:52.750 IEEE OUI Identifier: 00 00 00 00:26:52.750 Multi-path I/O 00:26:52.750 May have multiple subsystem ports: No 00:26:52.750 May have multiple controllers: No 00:26:52.750 Associated with SR-IOV VF: No 00:26:52.750 Max Data Transfer Size: Unlimited 00:26:52.750 Max Number of Namespaces: 0 00:26:52.750 Max Number of I/O Queues: 1024 00:26:52.750 NVMe Specification Version (VS): 1.3 00:26:52.750 NVMe Specification Version (Identify): 1.3 00:26:52.750 Maximum Queue Entries: 1024 
00:26:52.750 Contiguous Queues Required: No 00:26:52.750 Arbitration Mechanisms Supported 00:26:52.750 Weighted Round Robin: Not Supported 00:26:52.750 Vendor Specific: Not Supported 00:26:52.750 Reset Timeout: 7500 ms 00:26:52.750 Doorbell Stride: 4 bytes 00:26:52.750 NVM Subsystem Reset: Not Supported 00:26:52.750 Command Sets Supported 00:26:52.750 NVM Command Set: Supported 00:26:52.750 Boot Partition: Not Supported 00:26:52.750 Memory Page Size Minimum: 4096 bytes 00:26:52.750 Memory Page Size Maximum: 4096 bytes 00:26:52.750 Persistent Memory Region: Not Supported 00:26:52.750 Optional Asynchronous Events Supported 00:26:52.750 Namespace Attribute Notices: Not Supported 00:26:52.750 Firmware Activation Notices: Not Supported 00:26:52.750 ANA Change Notices: Not Supported 00:26:52.750 PLE Aggregate Log Change Notices: Not Supported 00:26:52.750 LBA Status Info Alert Notices: Not Supported 00:26:52.750 EGE Aggregate Log Change Notices: Not Supported 00:26:52.750 Normal NVM Subsystem Shutdown event: Not Supported 00:26:52.750 Zone Descriptor Change Notices: Not Supported 00:26:52.750 Discovery Log Change Notices: Supported 00:26:52.750 Controller Attributes 00:26:52.750 128-bit Host Identifier: Not Supported 00:26:52.750 Non-Operational Permissive Mode: Not Supported 00:26:52.750 NVM Sets: Not Supported 00:26:52.750 Read Recovery Levels: Not Supported 00:26:52.750 Endurance Groups: Not Supported 00:26:52.750 Predictable Latency Mode: Not Supported 00:26:52.750 Traffic Based Keep ALive: Not Supported 00:26:52.750 Namespace Granularity: Not Supported 00:26:52.750 SQ Associations: Not Supported 00:26:52.750 UUID List: Not Supported 00:26:52.750 Multi-Domain Subsystem: Not Supported 00:26:52.750 Fixed Capacity Management: Not Supported 00:26:52.750 Variable Capacity Management: Not Supported 00:26:52.750 Delete Endurance Group: Not Supported 00:26:52.750 Delete NVM Set: Not Supported 00:26:52.750 Extended LBA Formats Supported: Not Supported 00:26:52.750 Flexible 
Data Placement Supported: Not Supported 00:26:52.750 00:26:52.750 Controller Memory Buffer Support 00:26:52.750 ================================ 00:26:52.750 Supported: No 00:26:52.750 00:26:52.750 Persistent Memory Region Support 00:26:52.750 ================================ 00:26:52.750 Supported: No 00:26:52.750 00:26:52.750 Admin Command Set Attributes 00:26:52.750 ============================ 00:26:52.750 Security Send/Receive: Not Supported 00:26:52.750 Format NVM: Not Supported 00:26:52.750 Firmware Activate/Download: Not Supported 00:26:52.750 Namespace Management: Not Supported 00:26:52.750 Device Self-Test: Not Supported 00:26:52.750 Directives: Not Supported 00:26:52.750 NVMe-MI: Not Supported 00:26:52.750 Virtualization Management: Not Supported 00:26:52.750 Doorbell Buffer Config: Not Supported 00:26:52.750 Get LBA Status Capability: Not Supported 00:26:52.750 Command & Feature Lockdown Capability: Not Supported 00:26:52.750 Abort Command Limit: 1 00:26:52.750 Async Event Request Limit: 1 00:26:52.750 Number of Firmware Slots: N/A 00:26:52.750 Firmware Slot 1 Read-Only: N/A 00:26:52.750 Firmware Activation Without Reset: N/A 00:26:52.750 Multiple Update Detection Support: N/A 00:26:52.750 Firmware Update Granularity: No Information Provided 00:26:52.750 Per-Namespace SMART Log: No 00:26:52.750 Asymmetric Namespace Access Log Page: Not Supported 00:26:52.750 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:52.750 Command Effects Log Page: Not Supported 00:26:52.750 Get Log Page Extended Data: Supported 00:26:52.750 Telemetry Log Pages: Not Supported 00:26:52.750 Persistent Event Log Pages: Not Supported 00:26:52.750 Supported Log Pages Log Page: May Support 00:26:52.750 Commands Supported & Effects Log Page: Not Supported 00:26:52.750 Feature Identifiers & Effects Log Page:May Support 00:26:52.750 NVMe-MI Commands & Effects Log Page: May Support 00:26:52.750 Data Area 4 for Telemetry Log: Not Supported 00:26:52.750 Error Log Page Entries 
Supported: 1 00:26:52.750 Keep Alive: Not Supported 00:26:52.750 00:26:52.750 NVM Command Set Attributes 00:26:52.750 ========================== 00:26:52.750 Submission Queue Entry Size 00:26:52.750 Max: 1 00:26:52.750 Min: 1 00:26:52.750 Completion Queue Entry Size 00:26:52.750 Max: 1 00:26:52.750 Min: 1 00:26:52.750 Number of Namespaces: 0 00:26:52.750 Compare Command: Not Supported 00:26:52.750 Write Uncorrectable Command: Not Supported 00:26:52.750 Dataset Management Command: Not Supported 00:26:52.750 Write Zeroes Command: Not Supported 00:26:52.750 Set Features Save Field: Not Supported 00:26:52.750 Reservations: Not Supported 00:26:52.750 Timestamp: Not Supported 00:26:52.750 Copy: Not Supported 00:26:52.750 Volatile Write Cache: Not Present 00:26:52.750 Atomic Write Unit (Normal): 1 00:26:52.750 Atomic Write Unit (PFail): 1 00:26:52.750 Atomic Compare & Write Unit: 1 00:26:52.750 Fused Compare & Write: Not Supported 00:26:52.750 Scatter-Gather List 00:26:52.751 SGL Command Set: Supported 00:26:52.751 SGL Keyed: Not Supported 00:26:52.751 SGL Bit Bucket Descriptor: Not Supported 00:26:52.751 SGL Metadata Pointer: Not Supported 00:26:52.751 Oversized SGL: Not Supported 00:26:52.751 SGL Metadata Address: Not Supported 00:26:52.751 SGL Offset: Supported 00:26:52.751 Transport SGL Data Block: Not Supported 00:26:52.751 Replay Protected Memory Block: Not Supported 00:26:52.751 00:26:52.751 Firmware Slot Information 00:26:52.751 ========================= 00:26:52.751 Active slot: 0 00:26:52.751 00:26:52.751 00:26:52.751 Error Log 00:26:52.751 ========= 00:26:52.751 00:26:52.751 Active Namespaces 00:26:52.751 ================= 00:26:52.751 Discovery Log Page 00:26:52.751 ================== 00:26:52.751 Generation Counter: 2 00:26:52.751 Number of Records: 2 00:26:52.751 Record Format: 0 00:26:52.751 00:26:52.751 Discovery Log Entry 0 00:26:52.751 ---------------------- 00:26:52.751 Transport Type: 3 (TCP) 00:26:52.751 Address Family: 1 (IPv4) 00:26:52.751 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:26:52.751 Entry Flags: 00:26:52.751 Duplicate Returned Information: 0 00:26:52.751 Explicit Persistent Connection Support for Discovery: 0 00:26:52.751 Transport Requirements: 00:26:52.751 Secure Channel: Not Specified 00:26:52.751 Port ID: 1 (0x0001) 00:26:52.751 Controller ID: 65535 (0xffff) 00:26:52.751 Admin Max SQ Size: 32 00:26:52.751 Transport Service Identifier: 4420 00:26:52.751 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:52.751 Transport Address: 10.0.0.1 00:26:52.751 Discovery Log Entry 1 00:26:52.751 ---------------------- 00:26:52.751 Transport Type: 3 (TCP) 00:26:52.751 Address Family: 1 (IPv4) 00:26:52.751 Subsystem Type: 2 (NVM Subsystem) 00:26:52.751 Entry Flags: 00:26:52.751 Duplicate Returned Information: 0 00:26:52.751 Explicit Persistent Connection Support for Discovery: 0 00:26:52.751 Transport Requirements: 00:26:52.751 Secure Channel: Not Specified 00:26:52.751 Port ID: 1 (0x0001) 00:26:52.751 Controller ID: 65535 (0xffff) 00:26:52.751 Admin Max SQ Size: 32 00:26:52.751 Transport Service Identifier: 4420 00:26:52.751 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:52.751 Transport Address: 10.0.0.1 00:26:52.751 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:53.012 get_feature(0x01) failed 00:26:53.012 get_feature(0x02) failed 00:26:53.012 get_feature(0x04) failed 00:26:53.012 ===================================================== 00:26:53.012 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:53.012 ===================================================== 00:26:53.012 Controller Capabilities/Features 00:26:53.012 ================================ 00:26:53.012 Vendor ID: 0000 00:26:53.012 Subsystem Vendor ID: 
0000 00:26:53.012 Serial Number: e36052a4b3d7c5565a15 00:26:53.012 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:53.012 Firmware Version: 6.8.9-20 00:26:53.012 Recommended Arb Burst: 6 00:26:53.012 IEEE OUI Identifier: 00 00 00 00:26:53.012 Multi-path I/O 00:26:53.012 May have multiple subsystem ports: Yes 00:26:53.012 May have multiple controllers: Yes 00:26:53.012 Associated with SR-IOV VF: No 00:26:53.012 Max Data Transfer Size: Unlimited 00:26:53.012 Max Number of Namespaces: 1024 00:26:53.012 Max Number of I/O Queues: 128 00:26:53.012 NVMe Specification Version (VS): 1.3 00:26:53.012 NVMe Specification Version (Identify): 1.3 00:26:53.012 Maximum Queue Entries: 1024 00:26:53.012 Contiguous Queues Required: No 00:26:53.012 Arbitration Mechanisms Supported 00:26:53.012 Weighted Round Robin: Not Supported 00:26:53.012 Vendor Specific: Not Supported 00:26:53.012 Reset Timeout: 7500 ms 00:26:53.012 Doorbell Stride: 4 bytes 00:26:53.012 NVM Subsystem Reset: Not Supported 00:26:53.012 Command Sets Supported 00:26:53.012 NVM Command Set: Supported 00:26:53.012 Boot Partition: Not Supported 00:26:53.012 Memory Page Size Minimum: 4096 bytes 00:26:53.012 Memory Page Size Maximum: 4096 bytes 00:26:53.012 Persistent Memory Region: Not Supported 00:26:53.012 Optional Asynchronous Events Supported 00:26:53.012 Namespace Attribute Notices: Supported 00:26:53.012 Firmware Activation Notices: Not Supported 00:26:53.012 ANA Change Notices: Supported 00:26:53.012 PLE Aggregate Log Change Notices: Not Supported 00:26:53.012 LBA Status Info Alert Notices: Not Supported 00:26:53.012 EGE Aggregate Log Change Notices: Not Supported 00:26:53.012 Normal NVM Subsystem Shutdown event: Not Supported 00:26:53.012 Zone Descriptor Change Notices: Not Supported 00:26:53.012 Discovery Log Change Notices: Not Supported 00:26:53.012 Controller Attributes 00:26:53.012 128-bit Host Identifier: Supported 00:26:53.012 Non-Operational Permissive Mode: Not Supported 00:26:53.012 NVM Sets: Not 
Supported 00:26:53.012 Read Recovery Levels: Not Supported 00:26:53.012 Endurance Groups: Not Supported 00:26:53.012 Predictable Latency Mode: Not Supported 00:26:53.012 Traffic Based Keep ALive: Supported 00:26:53.012 Namespace Granularity: Not Supported 00:26:53.012 SQ Associations: Not Supported 00:26:53.012 UUID List: Not Supported 00:26:53.012 Multi-Domain Subsystem: Not Supported 00:26:53.012 Fixed Capacity Management: Not Supported 00:26:53.012 Variable Capacity Management: Not Supported 00:26:53.012 Delete Endurance Group: Not Supported 00:26:53.012 Delete NVM Set: Not Supported 00:26:53.012 Extended LBA Formats Supported: Not Supported 00:26:53.012 Flexible Data Placement Supported: Not Supported 00:26:53.012 00:26:53.012 Controller Memory Buffer Support 00:26:53.012 ================================ 00:26:53.012 Supported: No 00:26:53.012 00:26:53.012 Persistent Memory Region Support 00:26:53.012 ================================ 00:26:53.012 Supported: No 00:26:53.013 00:26:53.013 Admin Command Set Attributes 00:26:53.013 ============================ 00:26:53.013 Security Send/Receive: Not Supported 00:26:53.013 Format NVM: Not Supported 00:26:53.013 Firmware Activate/Download: Not Supported 00:26:53.013 Namespace Management: Not Supported 00:26:53.013 Device Self-Test: Not Supported 00:26:53.013 Directives: Not Supported 00:26:53.013 NVMe-MI: Not Supported 00:26:53.013 Virtualization Management: Not Supported 00:26:53.013 Doorbell Buffer Config: Not Supported 00:26:53.013 Get LBA Status Capability: Not Supported 00:26:53.013 Command & Feature Lockdown Capability: Not Supported 00:26:53.013 Abort Command Limit: 4 00:26:53.013 Async Event Request Limit: 4 00:26:53.013 Number of Firmware Slots: N/A 00:26:53.013 Firmware Slot 1 Read-Only: N/A 00:26:53.013 Firmware Activation Without Reset: N/A 00:26:53.013 Multiple Update Detection Support: N/A 00:26:53.013 Firmware Update Granularity: No Information Provided 00:26:53.013 Per-Namespace SMART Log: Yes 
00:26:53.013 Asymmetric Namespace Access Log Page: Supported 00:26:53.013 ANA Transition Time : 10 sec 00:26:53.013 00:26:53.013 Asymmetric Namespace Access Capabilities 00:26:53.013 ANA Optimized State : Supported 00:26:53.013 ANA Non-Optimized State : Supported 00:26:53.013 ANA Inaccessible State : Supported 00:26:53.013 ANA Persistent Loss State : Supported 00:26:53.013 ANA Change State : Supported 00:26:53.013 ANAGRPID is not changed : No 00:26:53.013 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:53.013 00:26:53.013 ANA Group Identifier Maximum : 128 00:26:53.013 Number of ANA Group Identifiers : 128 00:26:53.013 Max Number of Allowed Namespaces : 1024 00:26:53.013 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:53.013 Command Effects Log Page: Supported 00:26:53.013 Get Log Page Extended Data: Supported 00:26:53.013 Telemetry Log Pages: Not Supported 00:26:53.013 Persistent Event Log Pages: Not Supported 00:26:53.013 Supported Log Pages Log Page: May Support 00:26:53.013 Commands Supported & Effects Log Page: Not Supported 00:26:53.013 Feature Identifiers & Effects Log Page:May Support 00:26:53.013 NVMe-MI Commands & Effects Log Page: May Support 00:26:53.013 Data Area 4 for Telemetry Log: Not Supported 00:26:53.013 Error Log Page Entries Supported: 128 00:26:53.013 Keep Alive: Supported 00:26:53.013 Keep Alive Granularity: 1000 ms 00:26:53.013 00:26:53.013 NVM Command Set Attributes 00:26:53.013 ========================== 00:26:53.013 Submission Queue Entry Size 00:26:53.013 Max: 64 00:26:53.013 Min: 64 00:26:53.013 Completion Queue Entry Size 00:26:53.013 Max: 16 00:26:53.013 Min: 16 00:26:53.013 Number of Namespaces: 1024 00:26:53.013 Compare Command: Not Supported 00:26:53.013 Write Uncorrectable Command: Not Supported 00:26:53.013 Dataset Management Command: Supported 00:26:53.013 Write Zeroes Command: Supported 00:26:53.013 Set Features Save Field: Not Supported 00:26:53.013 Reservations: Not Supported 00:26:53.013 Timestamp: Not Supported 
00:26:53.013 Copy: Not Supported 00:26:53.013 Volatile Write Cache: Present 00:26:53.013 Atomic Write Unit (Normal): 1 00:26:53.013 Atomic Write Unit (PFail): 1 00:26:53.013 Atomic Compare & Write Unit: 1 00:26:53.013 Fused Compare & Write: Not Supported 00:26:53.013 Scatter-Gather List 00:26:53.013 SGL Command Set: Supported 00:26:53.013 SGL Keyed: Not Supported 00:26:53.013 SGL Bit Bucket Descriptor: Not Supported 00:26:53.013 SGL Metadata Pointer: Not Supported 00:26:53.013 Oversized SGL: Not Supported 00:26:53.013 SGL Metadata Address: Not Supported 00:26:53.013 SGL Offset: Supported 00:26:53.013 Transport SGL Data Block: Not Supported 00:26:53.013 Replay Protected Memory Block: Not Supported 00:26:53.013 00:26:53.013 Firmware Slot Information 00:26:53.013 ========================= 00:26:53.013 Active slot: 0 00:26:53.013 00:26:53.013 Asymmetric Namespace Access 00:26:53.013 =========================== 00:26:53.013 Change Count : 0 00:26:53.013 Number of ANA Group Descriptors : 1 00:26:53.013 ANA Group Descriptor : 0 00:26:53.013 ANA Group ID : 1 00:26:53.013 Number of NSID Values : 1 00:26:53.013 Change Count : 0 00:26:53.013 ANA State : 1 00:26:53.013 Namespace Identifier : 1 00:26:53.013 00:26:53.013 Commands Supported and Effects 00:26:53.013 ============================== 00:26:53.013 Admin Commands 00:26:53.013 -------------- 00:26:53.013 Get Log Page (02h): Supported 00:26:53.013 Identify (06h): Supported 00:26:53.013 Abort (08h): Supported 00:26:53.013 Set Features (09h): Supported 00:26:53.013 Get Features (0Ah): Supported 00:26:53.013 Asynchronous Event Request (0Ch): Supported 00:26:53.013 Keep Alive (18h): Supported 00:26:53.013 I/O Commands 00:26:53.013 ------------ 00:26:53.013 Flush (00h): Supported 00:26:53.013 Write (01h): Supported LBA-Change 00:26:53.013 Read (02h): Supported 00:26:53.013 Write Zeroes (08h): Supported LBA-Change 00:26:53.013 Dataset Management (09h): Supported 00:26:53.013 00:26:53.013 Error Log 00:26:53.013 ========= 
00:26:53.013 Entry: 0 00:26:53.013 Error Count: 0x3 00:26:53.013 Submission Queue Id: 0x0 00:26:53.013 Command Id: 0x5 00:26:53.013 Phase Bit: 0 00:26:53.013 Status Code: 0x2 00:26:53.013 Status Code Type: 0x0 00:26:53.013 Do Not Retry: 1 00:26:53.013 Error Location: 0x28 00:26:53.013 LBA: 0x0 00:26:53.013 Namespace: 0x0 00:26:53.013 Vendor Log Page: 0x0 00:26:53.013 ----------- 00:26:53.013 Entry: 1 00:26:53.013 Error Count: 0x2 00:26:53.013 Submission Queue Id: 0x0 00:26:53.013 Command Id: 0x5 00:26:53.013 Phase Bit: 0 00:26:53.013 Status Code: 0x2 00:26:53.013 Status Code Type: 0x0 00:26:53.013 Do Not Retry: 1 00:26:53.013 Error Location: 0x28 00:26:53.013 LBA: 0x0 00:26:53.013 Namespace: 0x0 00:26:53.013 Vendor Log Page: 0x0 00:26:53.013 ----------- 00:26:53.013 Entry: 2 00:26:53.013 Error Count: 0x1 00:26:53.013 Submission Queue Id: 0x0 00:26:53.013 Command Id: 0x4 00:26:53.013 Phase Bit: 0 00:26:53.013 Status Code: 0x2 00:26:53.013 Status Code Type: 0x0 00:26:53.013 Do Not Retry: 1 00:26:53.013 Error Location: 0x28 00:26:53.013 LBA: 0x0 00:26:53.013 Namespace: 0x0 00:26:53.013 Vendor Log Page: 0x0 00:26:53.013 00:26:53.013 Number of Queues 00:26:53.013 ================ 00:26:53.013 Number of I/O Submission Queues: 128 00:26:53.013 Number of I/O Completion Queues: 128 00:26:53.013 00:26:53.013 ZNS Specific Controller Data 00:26:53.013 ============================ 00:26:53.013 Zone Append Size Limit: 0 00:26:53.013 00:26:53.013 00:26:53.013 Active Namespaces 00:26:53.013 ================= 00:26:53.013 get_feature(0x05) failed 00:26:53.013 Namespace ID:1 00:26:53.013 Command Set Identifier: NVM (00h) 00:26:53.013 Deallocate: Supported 00:26:53.013 Deallocated/Unwritten Error: Not Supported 00:26:53.013 Deallocated Read Value: Unknown 00:26:53.013 Deallocate in Write Zeroes: Not Supported 00:26:53.013 Deallocated Guard Field: 0xFFFF 00:26:53.013 Flush: Supported 00:26:53.013 Reservation: Not Supported 00:26:53.013 Namespace Sharing Capabilities: Multiple 
Controllers 00:26:53.013 Size (in LBAs): 3750748848 (1788GiB) 00:26:53.013 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:53.013 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:53.013 UUID: f316bca3-6623-4ac1-a592-d56b2d4c324a 00:26:53.013 Thin Provisioning: Not Supported 00:26:53.013 Per-NS Atomic Units: Yes 00:26:53.013 Atomic Write Unit (Normal): 8 00:26:53.013 Atomic Write Unit (PFail): 8 00:26:53.013 Preferred Write Granularity: 8 00:26:53.013 Atomic Compare & Write Unit: 8 00:26:53.013 Atomic Boundary Size (Normal): 0 00:26:53.013 Atomic Boundary Size (PFail): 0 00:26:53.013 Atomic Boundary Offset: 0 00:26:53.013 NGUID/EUI64 Never Reused: No 00:26:53.013 ANA group ID: 1 00:26:53.013 Namespace Write Protected: No 00:26:53.013 Number of LBA Formats: 1 00:26:53.013 Current LBA Format: LBA Format #00 00:26:53.013 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:53.013 00:26:53.013 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:53.013 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:53.013 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:53.013 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:53.013 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:53.013 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:53.013 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:53.013 rmmod nvme_tcp 00:26:53.013 rmmod nvme_fabrics 00:26:53.014 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:53.014 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:53.014 11:08:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:53.014 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:26:53.014 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:53.014 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:53.014 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:53.014 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:53.014 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:53.014 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:53.014 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:53.014 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:53.014 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:53.014 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.014 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.014 11:08:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.557 11:08:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:55.557 11:08:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:55.557 11:08:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:55.557 11:08:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:55.557 11:08:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:55.557 11:08:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:55.557 11:08:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:55.557 11:08:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:55.557 11:08:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:55.557 11:08:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:55.557 11:08:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:58.102 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:58.102 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:58.102 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:58.102 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:58.102 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:58.102 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:58.102 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:58.102 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:58.102 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:58.102 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:58.102 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:58.102 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:58.102 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:26:58.102 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:58.102 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:58.102 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:58.102 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:26:58.674 00:26:58.675 real 0m18.152s 00:26:58.675 user 0m4.472s 00:26:58.675 sys 0m10.579s 00:26:58.675 11:08:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:58.675 11:08:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:58.675 ************************************ 00:26:58.675 END TEST nvmf_identify_kernel_target 00:26:58.675 ************************************ 00:26:58.675 11:08:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:58.675 11:08:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:58.675 11:08:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:58.675 11:08:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.675 ************************************ 00:26:58.675 START TEST nvmf_auth_host 00:26:58.675 ************************************ 00:26:58.675 11:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:58.675 * Looking for test storage... 
00:26:58.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:58.675 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:58.675 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:26:58.675 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:58.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.936 --rc genhtml_branch_coverage=1 00:26:58.936 --rc genhtml_function_coverage=1 00:26:58.936 --rc genhtml_legend=1 00:26:58.936 --rc geninfo_all_blocks=1 00:26:58.936 --rc geninfo_unexecuted_blocks=1 00:26:58.936 00:26:58.936 ' 00:26:58.936 11:08:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:58.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.936 --rc genhtml_branch_coverage=1 00:26:58.936 --rc genhtml_function_coverage=1 00:26:58.936 --rc genhtml_legend=1 00:26:58.936 --rc geninfo_all_blocks=1 00:26:58.936 --rc geninfo_unexecuted_blocks=1 00:26:58.936 00:26:58.936 ' 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:58.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.936 --rc genhtml_branch_coverage=1 00:26:58.936 --rc genhtml_function_coverage=1 00:26:58.936 --rc genhtml_legend=1 00:26:58.936 --rc geninfo_all_blocks=1 00:26:58.936 --rc geninfo_unexecuted_blocks=1 00:26:58.936 00:26:58.936 ' 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:58.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.936 --rc genhtml_branch_coverage=1 00:26:58.936 --rc genhtml_function_coverage=1 00:26:58.936 --rc genhtml_legend=1 00:26:58.936 --rc geninfo_all_blocks=1 00:26:58.936 --rc geninfo_unexecuted_blocks=1 00:26:58.936 00:26:58.936 ' 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.936 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.937 11:08:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:58.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:58.937 11:08:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:58.937 11:08:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:05.655 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:05.655 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:05.655 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:05.655 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:05.655 11:08:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:05.655 11:08:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.655 11:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:05.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:27:05.916 00:27:05.916 --- 10.0.0.2 ping statistics --- 00:27:05.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.916 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:05.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:27:05.916 00:27:05.916 --- 10.0.0.1 ping statistics --- 00:27:05.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.916 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3408207 00:27:05.916 11:08:57 
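The nvmf_tcp_init sequence above moves one port of the NIC pair into a network namespace so target (10.0.0.2 in the namespace) and initiator (10.0.0.1 in the root namespace) can talk over real hardware on one box. A dry-run sketch of that plumbing, with interface names and addresses taken from the log; `run` only echoes the commands unless DO_IT=1, since the real ones need root:

```shell
# Dry-run sketch of the netns topology built above. By default each
# command is printed, not executed; set DO_IT=1 and run as root to apply.
run() { if [ "${DO_IT:-0}" = 1 ]; then "$@"; else echo "+ $*"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"               # target side moves into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                            # initiator -> target reachability check
```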
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3408207 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3408207 ']' 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:05.916 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.177 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=56fa928ab4a2a68058d29f3725ad1518 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vPt 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 56fa928ab4a2a68058d29f3725ad1518 0 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 56fa928ab4a2a68058d29f3725ad1518 0 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=56fa928ab4a2a68058d29f3725ad1518 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:06.178 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vPt 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vPt 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.vPt 00:27:06.439 11:08:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dfe93878e502a41d94b341d32c6b42135a977a7e09983c8e9a71322c7a3fff39 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Yga 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dfe93878e502a41d94b341d32c6b42135a977a7e09983c8e9a71322c7a3fff39 3 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dfe93878e502a41d94b341d32c6b42135a977a7e09983c8e9a71322c7a3fff39 3 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dfe93878e502a41d94b341d32c6b42135a977a7e09983c8e9a71322c7a3fff39 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Yga 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Yga 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Yga 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=95c4673b434649c5d7ef614fd074f10df21729d6ab8b821d 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.BLh 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 95c4673b434649c5d7ef614fd074f10df21729d6ab8b821d 0 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 95c4673b434649c5d7ef614fd074f10df21729d6ab8b821d 0 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:06.439 11:08:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=95c4673b434649c5d7ef614fd074f10df21729d6ab8b821d 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.BLh 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.BLh 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.BLh 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4a3d8036f6e9e46962fc46ed9c4bde4fdf26974e4c187ec4 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.852 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4a3d8036f6e9e46962fc46ed9c4bde4fdf26974e4c187ec4 2 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 4a3d8036f6e9e46962fc46ed9c4bde4fdf26974e4c187ec4 2 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4a3d8036f6e9e46962fc46ed9c4bde4fdf26974e4c187ec4 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.852 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.852 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.852 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.439 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:06.440 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:06.440 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:06.440 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1b8b3d3a6ff0770ff7c3cc5853941fda 00:27:06.440 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:06.440 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.qF6 00:27:06.440 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1b8b3d3a6ff0770ff7c3cc5853941fda 1 00:27:06.440 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1b8b3d3a6ff0770ff7c3cc5853941fda 1 00:27:06.440 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.440 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:06.440 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1b8b3d3a6ff0770ff7c3cc5853941fda 00:27:06.440 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:06.440 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.qF6 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.qF6 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.qF6 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=742d07bd7ba7ef12cf50c5eabc63da58 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.W2o 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 742d07bd7ba7ef12cf50c5eabc63da58 1 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 742d07bd7ba7ef12cf50c5eabc63da58 1 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=742d07bd7ba7ef12cf50c5eabc63da58 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.W2o 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.W2o 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.W2o 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:06.700 11:08:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ff93fa54cc374409457ed0c12796fdddaa6d357a288a1167 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.YBO 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ff93fa54cc374409457ed0c12796fdddaa6d357a288a1167 2 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ff93fa54cc374409457ed0c12796fdddaa6d357a288a1167 2 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ff93fa54cc374409457ed0c12796fdddaa6d357a288a1167 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:06.700 11:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.YBO 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.YBO 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.YBO 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5f1c2ebdb9b9b46dea81a714b26cbe27 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.j3H 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5f1c2ebdb9b9b46dea81a714b26cbe27 0 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5f1c2ebdb9b9b46dea81a714b26cbe27 0 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5f1c2ebdb9b9b46dea81a714b26cbe27 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.j3H 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.j3H 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.j3H 00:27:06.700 11:08:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ee0e6f6977c8e14623491c4300c9729525d6af0fac2bf2a541dddf11065cdb26 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.QoZ 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ee0e6f6977c8e14623491c4300c9729525d6af0fac2bf2a541dddf11065cdb26 3 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ee0e6f6977c8e14623491c4300c9729525d6af0fac2bf2a541dddf11065cdb26 3 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ee0e6f6977c8e14623491c4300c9729525d6af0fac2bf2a541dddf11065cdb26 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:06.700 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.QoZ 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.QoZ 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.QoZ 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3408207 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3408207 ']' 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
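Each gen_dhchap_key call above draws len/2 random bytes as a hex string via xxd, then the embedded `python -` step wraps it in the DH-HMAC-CHAP secret representation: base64(secret || CRC32(secret)), prefixed with `DHHC-1:` and the digest id. A hedged, self-contained sketch of that flow (the helper name mirrors the log; using `python3 -` with argv rather than the script's interpolated heredoc is an assumption):

```shell
# Sketch of gen_dhchap_key: <len> hex chars of entropy, formatted as
# "DHHC-1:<digest-id>:<base64(secret + CRC32 little-endian)>:".
gen_dhchap_key() {
  local digest=$1 len=$2 key
  # len hex characters => len/2 random bytes, matching the xxd call above
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
  python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC32 of the secret, appended
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
}

gen_dhchap_key 0 32   # "null" digest, 32-char secret, as in host/auth.sh@73
```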
00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vPt 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Yga ]] 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Yga 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.BLh 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.852 ]] 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.852 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.qF6 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.961 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.W2o ]] 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.W2o 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.YBO 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.j3H ]] 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.j3H 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.QoZ 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.223 11:08:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:07.223 11:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:10.565 Waiting for block devices as requested 00:27:10.565 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:10.565 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:10.565 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:10.565 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:10.826 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:10.826 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:10.826 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:11.086 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:11.086 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:11.345 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:11.345 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:11.345 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:11.345 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:11.605 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:11.605 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:11.605 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:11.867 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:12.808 No valid GPT data, bailing 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:12.808 11:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:12.808 00:27:12.808 Discovery Log Number of Records 2, Generation counter 2 00:27:12.808 =====Discovery Log Entry 0====== 00:27:12.808 trtype: tcp 00:27:12.808 adrfam: ipv4 00:27:12.808 subtype: current discovery subsystem 00:27:12.808 treq: not specified, sq flow control disable supported 00:27:12.808 portid: 1 00:27:12.808 trsvcid: 4420 00:27:12.808 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:12.808 traddr: 10.0.0.1 00:27:12.808 eflags: none 00:27:12.808 sectype: none 00:27:12.808 =====Discovery Log Entry 1====== 00:27:12.808 trtype: tcp 00:27:12.808 adrfam: ipv4 00:27:12.808 subtype: nvme subsystem 00:27:12.808 treq: not specified, sq flow control disable supported 00:27:12.808 portid: 1 00:27:12.808 trsvcid: 4420 00:27:12.808 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:12.808 traddr: 10.0.0.1 00:27:12.808 eflags: none 00:27:12.808 sectype: none 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]] 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.808 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.069 nvme0n1 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: ]] 00:27:13.069 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.070 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.330 nvme0n1 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.330 11:09:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]] 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.330 
11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.330 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.331 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.331 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.331 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.331 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.331 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.331 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.331 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:13.331 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.331 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.331 nvme0n1 00:27:13.331 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.331 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.331 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.331 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.331 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.331 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: ]] 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.591 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:27:13.591 nvme0n1 00:27:13.592 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.592 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.592 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.592 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.592 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.592 11:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.592 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.592 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.592 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.592 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: ]] 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.852 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.853 nvme0n1 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.853 11:09:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.853 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.114 nvme0n1 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.114 
11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: ]] 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:14.114 
11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.114 11:09:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.114 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.375 nvme0n1 00:27:14.375 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.375 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.375 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.375 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.375 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.375 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.375 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.375 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.376 11:09:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]] 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.376 11:09:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.376 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.637 nvme0n1 00:27:14.637 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.637 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.637 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.637 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.637 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.637 11:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.637 11:09:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: ]] 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.637 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.898 nvme0n1 00:27:14.898 11:09:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:14.898 11:09:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: ]] 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.898 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.159 nvme0n1 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.159 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.421 11:09:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.421 nvme0n1 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: ]] 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.421 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.682 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.682 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.682 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.682 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.682 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.682 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.682 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.682 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.682 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.682 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:15.682 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.682 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.682 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:15.682 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.682 11:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.943 nvme0n1 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]] 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:15.943 
11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.943 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.205 nvme0n1 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.205 11:09:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: ]] 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.205 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.466 nvme0n1 00:27:16.466 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.466 11:09:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.466 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.466 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.466 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.466 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:16.727 
11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: ]] 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.727 11:09:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.727 11:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.988 nvme0n1 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.988 11:09:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.988 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.989 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.989 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.989 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.989 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.989 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.989 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.989 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.989 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.989 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.989 
11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.989 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.989 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.989 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:16.989 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.989 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.250 nvme0n1 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: ]] 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.250 11:09:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.250 11:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.823 nvme0n1 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]] 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.823 11:09:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.823 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.405 nvme0n1 00:27:18.405 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.405 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.405 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:18.406 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: ]] 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.407 11:09:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.407 11:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.981 nvme0n1 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.981 11:09:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.981 11:09:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: ]] 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.981 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.981 11:09:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.982 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.982 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.982 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.982 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.982 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.982 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.982 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.982 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:18.982 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.982 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.554 nvme0n1 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.554 11:09:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.554 11:09:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.554 11:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.124 nvme0n1 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: ]] 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.124 11:09:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.124 11:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.065 nvme0n1 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.065 11:09:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]] 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.065 11:09:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.065 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.066 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.066 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.066 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.066 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.066 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.066 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.066 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.066 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.066 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.066 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.066 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.066 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.066 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:21.066 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.066 11:09:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.636 nvme0n1 00:27:21.636 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.636 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.636 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.636 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.636 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.636 11:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: ]] 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.636 11:09:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.636 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.896 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.896 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.896 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.896 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.896 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.896 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.896 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.896 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.896 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.896 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.896 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.896 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.467 nvme0n1 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: ]] 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.467 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.728 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.728 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.728 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.728 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.728 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.728 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.728 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.728 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.728 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.728 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.728 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.728 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.728 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:22.728 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.728 11:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.298 nvme0n1 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:23.298 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.299 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.299 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.299 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:23.299 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.299 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:23.299 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.299 
11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.299 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:23.299 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.299 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.299 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:23.299 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.299 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.559 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.559 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.559 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.559 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.559 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.559 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.559 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.559 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.559 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.559 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.559 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.559 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.559 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.559 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.559 11:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.130 nvme0n1 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: ]] 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.130 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.391 nvme0n1 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.391 
11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]] 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048
00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.391 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.652 nvme0n1
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu:
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J:
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu:
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: ]]
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J:
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.652 11:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.913 nvme0n1
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==:
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp:
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==:
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: ]]
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp:
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.913 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.174 nvme0n1
00:27:25.174 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.174 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:25.174 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:25.174 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.174 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.174 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.174 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:25.174 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:25.174 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.174 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.174 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=:
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=:
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.175 nvme0n1
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.175 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG:
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=:
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG:
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: ]]
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=:
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:25.435 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.436 nvme0n1
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.436 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:25.696 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==:
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==:
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==:
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]]
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==:
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.697 11:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.697 nvme0n1
00:27:25.697 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.697 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:25.697 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:25.697 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.697 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu:
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J:
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu:
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: ]]
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J:
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.958 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.219 nvme0n1
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==:
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp:
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==:
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: ]]
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp:
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:26.219 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:26.220 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:26.220 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:26.220 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:26.220 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:26.220 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:26.220 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:26.220 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:26.220 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:26.220 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:26.220 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.480 nvme0n1
00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- #
xtrace_disable 00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:26.480 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.481 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.742 nvme0n1 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.742 11:09:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: ]] 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.742 11:09:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.742 11:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.742 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.742 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.742 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.742 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.742 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.742 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.742 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.742 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.742 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.742 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.742 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.742 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.742 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:26.742 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.742 11:09:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.003 nvme0n1 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]] 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.003 
11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.003 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.264 nvme0n1 00:27:27.264 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.264 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.264 11:09:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.264 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.264 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.264 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.264 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.264 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.524 11:09:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: ]] 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.524 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.785 nvme0n1 00:27:27.785 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.785 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.785 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.785 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.785 11:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: ]] 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.785 11:09:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.785 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.046 nvme0n1 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.046 11:09:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:28.046 11:09:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.046 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.047 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.047 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.047 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.047 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.047 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.047 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.047 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.047 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.047 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.047 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.047 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.047 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.047 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:28.047 
11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.047 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.312 nvme0n1 00:27:28.312 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.312 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.312 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.312 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.312 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.312 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.575 11:09:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: ]] 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.575 11:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.146 nvme0n1 
00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:29.146 11:09:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]] 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.146 
11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.146 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.407 nvme0n1 00:27:29.407 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.407 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.407 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.407 11:09:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.407 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.407 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: ]] 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.668 11:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.241 nvme0n1 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: ]] 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.241 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.242 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.242 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:27:30.242 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.242 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.242 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.242 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.242 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.502 nvme0n1 00:27:30.502 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.502 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.502 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.502 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.502 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.502 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.763 11:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x
00:27:31.025 nvme0n1
00:27:31.025 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.025 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:31.025 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.025 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.025 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG:
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=:
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG:
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: ]]
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=:
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:31.286 11:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.857 nvme0n1
00:27:31.857 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==:
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==:
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==:
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]]
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==:
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:32.117 11:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.058 nvme0n1
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu:
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J:
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu:
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: ]]
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J:
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.058 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.628 nvme0n1
00:27:33.628 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.628 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:33.628 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.628 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.628 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:33.628 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.628 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:33.628 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:33.628 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.628 11:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.628 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.628 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:33.628 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:27:33.628 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:33.628 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:33.628 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:33.628 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:33.628 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==:
00:27:33.628 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp:
00:27:33.628 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:33.628 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:33.628 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==:
00:27:33.628 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: ]]
00:27:33.628 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp:
00:27:33.628 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:27:33.628 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:33.628 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.629 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.568 nvme0n1
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=:
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=:
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:34.568 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.569 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.509 nvme0n1
00:27:35.509 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.509 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:35.509 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:35.509 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.509 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.509 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.509 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:35.509 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG:
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=:
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG:
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: ]]
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=:
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.510 nvme0n1
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==:
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==:
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==:
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]]
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==:
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.510 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.771 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:35.771 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:35.771 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:35.771 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:35.771 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:35.771 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:35.771 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:35.771 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:35.771 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:35.771 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:35.771 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:35.771 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:35.771 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.771 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.771 nvme0n1
00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[
0 == 0 ]] 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: ]] 00:27:35.771 11:09:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.771 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.032 nvme0n1 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.032 11:09:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: ]] 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.032 11:09:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.032 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.292 nvme0n1 00:27:36.292 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.292 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.292 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.293 11:09:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.293 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.554 nvme0n1 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: ]] 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.554 11:09:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.554 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.815 nvme0n1 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.815 11:09:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.815 
11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]] 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.815 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.816 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.816 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.816 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:27:36.816 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.816 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.816 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.816 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.816 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.816 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.816 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.816 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.816 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.816 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.079 nvme0n1 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: ]] 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 
00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.079 11:09:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.079 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.342 nvme0n1 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.342 11:09:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: ]] 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.342 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.603 nvme0n1 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.603 11:09:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.603 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.864 nvme0n1 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: ]] 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.864 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.865 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.865 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.865 11:09:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.865 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.865 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.865 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.865 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.865 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.865 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.865 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.865 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.865 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.865 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.865 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.125 nvme0n1 00:27:38.125 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.125 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.125 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.125 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.125 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.125 11:09:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.125 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.125 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.125 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.125 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.384 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.384 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:38.385 11:09:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]] 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.385 11:09:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.385 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.647 nvme0n1 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.647 11:09:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: ]] 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:38.647 11:09:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.647 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.908 nvme0n1 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: ]] 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:38.908 11:09:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.908 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.169 nvme0n1 00:27:39.169 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.169 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.169 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.169 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.169 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.169 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.429 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.430 
11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.430 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.690 nvme0n1 00:27:39.690 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.690 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.690 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.690 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.690 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.690 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.690 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.690 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:39.691 11:09:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: ]] 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.691 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.691 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.691 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.691 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.691 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.691 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:27:39.691 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.691 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.691 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.691 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.691 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.691 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.691 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.691 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:39.691 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.691 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.264 nvme0n1 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]] 00:27:40.264 11:09:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.264 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.835 nvme0n1 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: ]] 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.835 
11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.835 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.407 nvme0n1 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.407 11:09:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: ]] 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.407 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
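Each iteration above runs the same line from host/auth.sh@58: `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})`. This bash idiom builds the controller-key arguments conditionally — when `ckeys[keyid]` is unset or empty (as for keyid=4 here), the array expands to nothing and `bdev_nvme_attach_controller` is called without `--dhchap-ctrlr-key`. A minimal standalone sketch of that expansion (key values are placeholders, not the DHHC secrets from this log):

```shell
#!/usr/bin/env bash
# Sketch of the conditional-argument idiom from host/auth.sh@58.
# ${var:+word} expands to word only when var is set AND non-empty,
# so the ckey array either holds the flag pair or stays empty.
ckeys=([3]="DHHC-1:00:placeholder:" [4]="")   # keyid 4 has no ctrlr key

for keyid in 3 4; do
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  # "${ckey[@]}" would now be spliced into the rpc_cmd invocation;
  # for keyid=4 it contributes zero arguments.
  echo "keyid=$keyid argc=${#ckey[@]} args=${ckey[*]}"
done
```

This matches the two attach calls in the log: keyid=3 gets `--dhchap-key key3 --dhchap-ctrlr-key ckey3`, while keyid=4 gets only `--dhchap-key key4`.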
00:27:41.979 nvme0n1 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.979 
11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.979 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.552 nvme0n1 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTZmYTkyOGFiNGEyYTY4MDU4ZDI5ZjM3MjVhZDE1MTgy2xxG: 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: ]] 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZlOTM4NzhlNTAyYTQxZDk0YjM0MWQzMmM2YjQyMTM1YTk3N2E3ZTA5OTgzYzhlOWE3MTMyMmM3YTNmZmYzOUtyr80=: 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.552 11:09:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.552 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.172 nvme0n1 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.172 11:09:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]] 00:27:43.172 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:43.466 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:43.466 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.466 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.466 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:43.466 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.466 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.466 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:43.467 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.467 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.467 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.467 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.467 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.467 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.467 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.467 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.467 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.467 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.467 11:09:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.467 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.467 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.467 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.467 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.467 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.467 11:09:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.049 nvme0n1 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.049 11:09:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: ]] 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.049 11:09:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.049 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.990 nvme0n1 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.990 11:09:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmY5M2ZhNTRjYzM3NDQwOTQ1N2VkMGMxMjc5NmZkZGRhYTZkMzU3YTI4OGExMTY3qWa/iA==: 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: ]] 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWYxYzJlYmRiOWI5YjQ2ZGVhODFhNzE0YjI2Y2JlMjf/HPzp: 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.990 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:45.932 nvme0n1 00:27:45.932 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.932 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.932 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.932 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.932 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.932 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.932 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.932 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.932 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.932 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.932 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUwZTZmNjk3N2M4ZTE0NjIzNDkxYzQzMDBjOTcyOTUyNWQ2YWYwZmFjMmJmMmE1NDFkZGRmMTEwNjVjZGIyNiIegcM=: 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.933 
11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.933 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.506 nvme0n1 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]] 00:27:46.506 
11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.506 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:46.767 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:46.768 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:46.768 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.768 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.768 request: 00:27:46.768 { 00:27:46.768 "name": "nvme0", 00:27:46.768 "trtype": "tcp", 00:27:46.768 "traddr": "10.0.0.1", 00:27:46.768 "adrfam": "ipv4", 00:27:46.768 "trsvcid": "4420", 00:27:46.768 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:46.768 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:46.768 "prchk_reftag": false, 00:27:46.768 "prchk_guard": false, 00:27:46.768 "hdgst": false, 00:27:46.768 "ddgst": false, 00:27:46.768 "allow_unrecognized_csi": false, 00:27:46.768 "method": "bdev_nvme_attach_controller", 00:27:46.768 "req_id": 1 00:27:46.768 } 00:27:46.768 Got JSON-RPC error response 00:27:46.768 response: 00:27:46.768 { 00:27:46.768 "code": -5, 00:27:46.768 "message": "Input/output 
error" 00:27:46.768 } 00:27:46.768 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:46.768 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:46.768 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:46.768 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:46.768 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:46.768 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.768 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.768 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.768 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:46.768 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.768 request: 00:27:46.768 { 00:27:46.768 "name": "nvme0", 00:27:46.768 "trtype": "tcp", 00:27:46.768 "traddr": "10.0.0.1", 
00:27:46.768 "adrfam": "ipv4", 00:27:46.768 "trsvcid": "4420", 00:27:46.768 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:46.768 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:46.768 "prchk_reftag": false, 00:27:46.768 "prchk_guard": false, 00:27:46.768 "hdgst": false, 00:27:46.768 "ddgst": false, 00:27:46.768 "dhchap_key": "key2", 00:27:46.768 "allow_unrecognized_csi": false, 00:27:46.768 "method": "bdev_nvme_attach_controller", 00:27:46.768 "req_id": 1 00:27:46.768 } 00:27:46.768 Got JSON-RPC error response 00:27:46.768 response: 00:27:46.768 { 00:27:46.768 "code": -5, 00:27:46.768 "message": "Input/output error" 00:27:46.768 } 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.768 11:09:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:46.768 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:47.030 11:09:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.030 request: 00:27:47.030 { 00:27:47.030 "name": "nvme0", 00:27:47.030 "trtype": "tcp", 00:27:47.030 "traddr": "10.0.0.1", 00:27:47.030 "adrfam": "ipv4", 00:27:47.030 "trsvcid": "4420", 00:27:47.030 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:47.030 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:47.030 "prchk_reftag": false, 00:27:47.030 "prchk_guard": false, 00:27:47.030 "hdgst": false, 00:27:47.030 "ddgst": false, 00:27:47.030 "dhchap_key": "key1", 00:27:47.030 "dhchap_ctrlr_key": "ckey2", 00:27:47.030 "allow_unrecognized_csi": false, 00:27:47.030 "method": "bdev_nvme_attach_controller", 00:27:47.030 "req_id": 1 00:27:47.030 } 00:27:47.030 Got JSON-RPC error response 00:27:47.030 response: 00:27:47.030 { 00:27:47.030 "code": -5, 00:27:47.030 "message": "Input/output error" 00:27:47.030 } 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.030 nvme0n1 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.030 11:09:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: ]] 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.030 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.291 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.291 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.291 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.291 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.291 11:09:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:47.291 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.291 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.291 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:47.291 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.292 request: 00:27:47.292 { 00:27:47.292 "name": "nvme0", 00:27:47.292 "dhchap_key": "key1", 00:27:47.292 "dhchap_ctrlr_key": "ckey2", 00:27:47.292 "method": "bdev_nvme_set_keys", 00:27:47.292 "req_id": 1 00:27:47.292 } 00:27:47.292 Got JSON-RPC error response 00:27:47.292 response: 00:27:47.292 { 00:27:47.292 "code": -13, 00:27:47.292 "message": "Permission denied" 00:27:47.292 } 00:27:47.292 
11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:47.292 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:48.234 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.234 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.234 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.234 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:48.494 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.494 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:48.494 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTVjNDY3M2I0MzQ2NDljNWQ3ZWY2MTRmZDA3NGYxMGRmMjE3MjlkNmFiOGI4MjFkM8bqcw==: 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: ]] 00:27:49.437 11:09:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGEzZDgwMzZmNmU5ZTQ2OTYyZmM0NmVkOWM0YmRlNGZkZjI2OTc0ZTRjMTg3ZWM02UQozw==: 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.437 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.698 nvme0n1 00:27:49.698 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.698 11:09:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:49.698 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.698 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.698 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.698 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:49.698 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:49.698 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:49.698 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.698 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:49.698 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWI4YjNkM2E2ZmYwNzcwZmY3YzNjYzU4NTM5NDFmZGHo8uEu: 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: ]] 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzQyZDA3YmQ3YmE3ZWYxMmNmNTBjNWVhYmM2M2RhNTjuAj+J: 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:49.699 
11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.699 request: 00:27:49.699 { 00:27:49.699 "name": "nvme0", 00:27:49.699 "dhchap_key": "key2", 00:27:49.699 "dhchap_ctrlr_key": "ckey1", 00:27:49.699 "method": "bdev_nvme_set_keys", 00:27:49.699 "req_id": 1 00:27:49.699 } 00:27:49.699 Got JSON-RPC error response 00:27:49.699 response: 00:27:49.699 { 00:27:49.699 "code": -13, 00:27:49.699 "message": "Permission denied" 00:27:49.699 } 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.699 11:09:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.699 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.699 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:49.699 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:50.641 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.641 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:50.641 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.641 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.641 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.901 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:50.901 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:50.901 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:50.901 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:50.902 rmmod nvme_tcp 00:27:50.902 rmmod nvme_fabrics 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3408207 ']' 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3408207 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 3408207 ']' 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 3408207 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3408207 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3408207' 00:27:50.902 killing process with pid 3408207 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 3408207 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 3408207 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:50.902 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.446 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:53.446 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:53.446 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:53.446 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:53.446 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:53.446 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:53.446 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:53.446 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:53.446 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:53.446 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:53.447 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:53.447 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:53.447 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:56.750 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:56.750 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:56.750 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:56.750 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:56.750 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:56.750 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:56.750 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:56.750 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:56.750 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:56.750 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:56.750 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:56.750 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:56.750 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:56.750 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:56.750 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:56.750 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:56.750 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:57.010 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.vPt /tmp/spdk.key-null.BLh /tmp/spdk.key-sha256.qF6 /tmp/spdk.key-sha384.YBO /tmp/spdk.key-sha512.QoZ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:57.010 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:00.317 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:00.317 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:00.317 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:00.317 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:00.317 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:00.317 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:00.317 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:00.317 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:00.317 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:00.317 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:00.317 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:00.317 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:00.317 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:00.317 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:00.317 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:00.317 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:00.317 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:00.578 00:28:00.578 real 1m1.873s 00:28:00.578 user 0m55.805s 00:28:00.578 sys 0m15.012s 00:28:00.578 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:00.578 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.578 ************************************ 00:28:00.578 END TEST nvmf_auth_host 00:28:00.578 ************************************ 00:28:00.578 11:09:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:28:00.578 11:09:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:00.578 11:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:00.578 11:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:00.578 11:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.578 ************************************ 00:28:00.578 START TEST nvmf_digest 00:28:00.578 ************************************ 00:28:00.578 11:09:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:00.578 * Looking for test storage... 00:28:00.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:00.578 11:09:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:00.578 11:09:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:28:00.578 11:09:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:00.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.841 --rc genhtml_branch_coverage=1 00:28:00.841 --rc genhtml_function_coverage=1 00:28:00.841 --rc genhtml_legend=1 00:28:00.841 --rc geninfo_all_blocks=1 00:28:00.841 --rc geninfo_unexecuted_blocks=1 00:28:00.841 00:28:00.841 ' 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:00.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.841 --rc genhtml_branch_coverage=1 00:28:00.841 --rc genhtml_function_coverage=1 00:28:00.841 --rc genhtml_legend=1 00:28:00.841 --rc geninfo_all_blocks=1 00:28:00.841 --rc geninfo_unexecuted_blocks=1 00:28:00.841 00:28:00.841 ' 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:00.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.841 --rc genhtml_branch_coverage=1 00:28:00.841 --rc genhtml_function_coverage=1 00:28:00.841 --rc genhtml_legend=1 00:28:00.841 --rc geninfo_all_blocks=1 00:28:00.841 --rc geninfo_unexecuted_blocks=1 00:28:00.841 00:28:00.841 ' 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:00.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.841 --rc genhtml_branch_coverage=1 00:28:00.841 --rc genhtml_function_coverage=1 00:28:00.841 --rc genhtml_legend=1 00:28:00.841 --rc geninfo_all_blocks=1 00:28:00.841 --rc geninfo_unexecuted_blocks=1 00:28:00.841 00:28:00.841 ' 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.841 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:00.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:00.842 11:09:52 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:00.842 11:09:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:07.432 11:09:58 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:07.432 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:07.432 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:07.432 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:07.433 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:07.433 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:07.433 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:28:07.694 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:07.694 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:07.694 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:07.694 11:09:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:07.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:07.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:28:07.694 00:28:07.694 --- 10.0.0.2 ping statistics --- 00:28:07.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.694 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:07.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:07.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:28:07.694 00:28:07.694 --- 10.0.0.1 ping statistics --- 00:28:07.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.694 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:07.694 ************************************ 00:28:07.694 START TEST nvmf_digest_clean 00:28:07.694 ************************************ 00:28:07.694 
11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:07.694 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:07.955 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3426083 00:28:07.955 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3426083 00:28:07.955 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:07.955 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3426083 ']' 00:28:07.955 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.955 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:07.955 11:09:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.955 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:07.955 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:07.955 [2024-11-06 11:09:59.173587] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:28:07.955 [2024-11-06 11:09:59.173640] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.955 [2024-11-06 11:09:59.252411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.955 [2024-11-06 11:09:59.289170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.955 [2024-11-06 11:09:59.289206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.955 [2024-11-06 11:09:59.289214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:07.955 [2024-11-06 11:09:59.289221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:07.955 [2024-11-06 11:09:59.289227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:07.955 [2024-11-06 11:09:59.289799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.898 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:08.898 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:08.898 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:08.899 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:08.899 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:08.899 11:09:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:08.899 null0 00:28:08.899 [2024-11-06 11:10:00.079084] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.899 [2024-11-06 11:10:00.103284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3426180 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3426180 /var/tmp/bperf.sock 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3426180 ']' 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:08.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:08.899 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:08.899 [2024-11-06 11:10:00.159369] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:28:08.899 [2024-11-06 11:10:00.159420] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3426180 ] 00:28:08.899 [2024-11-06 11:10:00.247517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.899 [2024-11-06 11:10:00.283262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.842 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:09.842 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:09.842 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:09.842 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:09.842 11:10:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:09.842 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.842 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.102 nvme0n1 00:28:10.102 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:10.102 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:10.362 Running I/O for 2 seconds... 00:28:12.245 19666.00 IOPS, 76.82 MiB/s [2024-11-06T10:10:03.667Z] 19621.00 IOPS, 76.64 MiB/s 00:28:12.245 Latency(us) 00:28:12.245 [2024-11-06T10:10:03.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.246 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:12.246 nvme0n1 : 2.04 19253.86 75.21 0.00 0.00 6511.74 3112.96 45875.20 00:28:12.246 [2024-11-06T10:10:03.668Z] =================================================================================================================== 00:28:12.246 [2024-11-06T10:10:03.668Z] Total : 19253.86 75.21 0.00 0.00 6511.74 3112.96 45875.20 00:28:12.246 { 00:28:12.246 "results": [ 00:28:12.246 { 00:28:12.246 "job": "nvme0n1", 00:28:12.246 "core_mask": "0x2", 00:28:12.246 "workload": "randread", 00:28:12.246 "status": "finished", 00:28:12.246 "queue_depth": 128, 00:28:12.246 "io_size": 4096, 00:28:12.246 "runtime": 2.044785, 00:28:12.246 "iops": 19253.857985069335, 00:28:12.246 "mibps": 75.21038275417709, 00:28:12.246 "io_failed": 0, 00:28:12.246 "io_timeout": 0, 00:28:12.246 "avg_latency_us": 6511.737811870291, 00:28:12.246 "min_latency_us": 3112.96, 00:28:12.246 "max_latency_us": 45875.2 00:28:12.246 } 00:28:12.246 ], 00:28:12.246 "core_count": 1 00:28:12.246 } 00:28:12.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:12.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:12.246 
11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:12.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:12.246 | select(.opcode=="crc32c") 00:28:12.246 | "\(.module_name) \(.executed)"' 00:28:12.246 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:12.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:12.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:12.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:12.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:12.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3426180 00:28:12.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3426180 ']' 00:28:12.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3426180 00:28:12.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:12.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:12.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3426180 00:28:12.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:12.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:28:12.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3426180' 00:28:12.507 killing process with pid 3426180 00:28:12.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3426180 00:28:12.507 Received shutdown signal, test time was about 2.000000 seconds 00:28:12.507 00:28:12.507 Latency(us) 00:28:12.507 [2024-11-06T10:10:03.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.507 [2024-11-06T10:10:03.929Z] =================================================================================================================== 00:28:12.507 [2024-11-06T10:10:03.929Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:12.507 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3426180 00:28:12.769 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:12.769 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:12.769 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:12.769 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:12.769 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:12.769 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:12.769 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:12.769 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3427021 00:28:12.769 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3427021 /var/tmp/bperf.sock 
00:28:12.769 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3427021 ']' 00:28:12.769 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:12.769 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:12.769 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:12.769 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:12.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:12.769 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:12.769 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:12.769 [2024-11-06 11:10:04.031051] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:28:12.769 [2024-11-06 11:10:04.031110] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427021 ] 00:28:12.769 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:12.769 Zero copy mechanism will not be used. 
00:28:12.769 [2024-11-06 11:10:04.112890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.769 [2024-11-06 11:10:04.142270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.711 11:10:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:13.711 11:10:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:13.711 11:10:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:13.711 11:10:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:13.711 11:10:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:13.711 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.711 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.971 nvme0n1 00:28:13.971 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:13.971 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:14.232 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:14.232 Zero copy mechanism will not be used. 00:28:14.232 Running I/O for 2 seconds... 
00:28:16.119 3901.00 IOPS, 487.62 MiB/s [2024-11-06T10:10:07.541Z] 3983.00 IOPS, 497.88 MiB/s 00:28:16.119 Latency(us) 00:28:16.119 [2024-11-06T10:10:07.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.119 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:16.119 nvme0n1 : 2.00 3983.47 497.93 0.00 0.00 4014.18 624.64 15837.87 00:28:16.119 [2024-11-06T10:10:07.541Z] =================================================================================================================== 00:28:16.119 [2024-11-06T10:10:07.541Z] Total : 3983.47 497.93 0.00 0.00 4014.18 624.64 15837.87 00:28:16.119 { 00:28:16.119 "results": [ 00:28:16.119 { 00:28:16.119 "job": "nvme0n1", 00:28:16.119 "core_mask": "0x2", 00:28:16.119 "workload": "randread", 00:28:16.119 "status": "finished", 00:28:16.119 "queue_depth": 16, 00:28:16.119 "io_size": 131072, 00:28:16.119 "runtime": 2.003781, 00:28:16.119 "iops": 3983.4692513802656, 00:28:16.119 "mibps": 497.9336564225332, 00:28:16.119 "io_failed": 0, 00:28:16.119 "io_timeout": 0, 00:28:16.119 "avg_latency_us": 4014.182203290737, 00:28:16.119 "min_latency_us": 624.64, 00:28:16.119 "max_latency_us": 15837.866666666667 00:28:16.119 } 00:28:16.119 ], 00:28:16.119 "core_count": 1 00:28:16.119 } 00:28:16.119 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:16.119 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:16.119 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:16.119 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:16.119 | select(.opcode=="crc32c") 00:28:16.119 | "\(.module_name) \(.executed)"' 00:28:16.119 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3427021 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3427021 ']' 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3427021 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3427021 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3427021' 00:28:16.380 killing process with pid 3427021 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3427021 00:28:16.380 Received shutdown signal, test time was about 2.000000 seconds 
00:28:16.380 00:28:16.380 Latency(us) 00:28:16.380 [2024-11-06T10:10:07.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.380 [2024-11-06T10:10:07.802Z] =================================================================================================================== 00:28:16.380 [2024-11-06T10:10:07.802Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3427021 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3427802 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3427802 /var/tmp/bperf.sock 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3427802 ']' 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:16.380 11:10:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:16.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:16.380 11:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:16.641 [2024-11-06 11:10:07.844928] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:28:16.641 [2024-11-06 11:10:07.844990] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427802 ] 00:28:16.641 [2024-11-06 11:10:07.930703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.641 [2024-11-06 11:10:07.960125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.212 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:17.212 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:17.212 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:17.212 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:17.212 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:17.473 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.474 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.736 nvme0n1 00:28:17.736 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:17.736 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:17.996 Running I/O for 2 seconds... 
00:28:19.878 21579.00 IOPS, 84.29 MiB/s [2024-11-06T10:10:11.300Z] 21622.00 IOPS, 84.46 MiB/s 00:28:19.878 Latency(us) 00:28:19.878 [2024-11-06T10:10:11.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.878 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:19.878 nvme0n1 : 2.01 21624.53 84.47 0.00 0.00 5911.98 2252.80 10212.69 00:28:19.878 [2024-11-06T10:10:11.300Z] =================================================================================================================== 00:28:19.878 [2024-11-06T10:10:11.300Z] Total : 21624.53 84.47 0.00 0.00 5911.98 2252.80 10212.69 00:28:19.878 { 00:28:19.878 "results": [ 00:28:19.878 { 00:28:19.878 "job": "nvme0n1", 00:28:19.878 "core_mask": "0x2", 00:28:19.878 "workload": "randwrite", 00:28:19.878 "status": "finished", 00:28:19.878 "queue_depth": 128, 00:28:19.878 "io_size": 4096, 00:28:19.878 "runtime": 2.005685, 00:28:19.878 "iops": 21624.532267030965, 00:28:19.878 "mibps": 84.47082916808971, 00:28:19.878 "io_failed": 0, 00:28:19.878 "io_timeout": 0, 00:28:19.878 "avg_latency_us": 5911.980531833134, 00:28:19.878 "min_latency_us": 2252.8, 00:28:19.878 "max_latency_us": 10212.693333333333 00:28:19.878 } 00:28:19.878 ], 00:28:19.878 "core_count": 1 00:28:19.878 } 00:28:19.878 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:19.878 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:19.878 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:19.878 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:19.878 | select(.opcode=="crc32c") 00:28:19.878 | "\(.module_name) \(.executed)"' 00:28:19.878 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:20.138 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:20.138 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:20.138 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:20.138 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:20.138 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3427802 00:28:20.138 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3427802 ']' 00:28:20.138 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3427802 00:28:20.138 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:20.138 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:20.138 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3427802 00:28:20.138 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:20.138 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:20.138 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3427802' 00:28:20.138 killing process with pid 3427802 00:28:20.138 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3427802 00:28:20.138 Received shutdown signal, test time was about 2.000000 seconds 
00:28:20.138 00:28:20.138 Latency(us) 00:28:20.139 [2024-11-06T10:10:11.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.139 [2024-11-06T10:10:11.561Z] =================================================================================================================== 00:28:20.139 [2024-11-06T10:10:11.561Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:20.139 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3427802 00:28:20.399 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:20.399 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:20.399 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:20.399 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:20.399 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:20.399 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:20.399 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:20.399 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3428488 00:28:20.399 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3428488 /var/tmp/bperf.sock 00:28:20.399 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3428488 ']' 00:28:20.399 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:20.399 11:10:11 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:20.399 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:20.399 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:20.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:20.399 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:20.399 11:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:20.399 [2024-11-06 11:10:11.635021] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:28:20.399 [2024-11-06 11:10:11.635081] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3428488 ] 00:28:20.399 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:20.399 Zero copy mechanism will not be used. 
00:28:20.399 [2024-11-06 11:10:11.715528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.399 [2024-11-06 11:10:11.745074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.340 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:21.340 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:21.340 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:21.340 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:21.340 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:21.340 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.340 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.600 nvme0n1 00:28:21.600 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:21.600 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:21.600 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:21.600 Zero copy mechanism will not be used. 00:28:21.600 Running I/O for 2 seconds... 
00:28:23.923 3364.00 IOPS, 420.50 MiB/s [2024-11-06T10:10:15.345Z] 3987.00 IOPS, 498.38 MiB/s 00:28:23.923 Latency(us) 00:28:23.923 [2024-11-06T10:10:15.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.923 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:23.923 nvme0n1 : 2.01 3983.80 497.97 0.00 0.00 4009.00 1829.55 7045.12 00:28:23.923 [2024-11-06T10:10:15.346Z] =================================================================================================================== 00:28:23.924 [2024-11-06T10:10:15.346Z] Total : 3983.80 497.97 0.00 0.00 4009.00 1829.55 7045.12 00:28:23.924 { 00:28:23.924 "results": [ 00:28:23.924 { 00:28:23.924 "job": "nvme0n1", 00:28:23.924 "core_mask": "0x2", 00:28:23.924 "workload": "randwrite", 00:28:23.924 "status": "finished", 00:28:23.924 "queue_depth": 16, 00:28:23.924 "io_size": 131072, 00:28:23.924 "runtime": 2.006628, 00:28:23.924 "iops": 3983.7976944406237, 00:28:23.924 "mibps": 497.97471180507796, 00:28:23.924 "io_failed": 0, 00:28:23.924 "io_timeout": 0, 00:28:23.924 "avg_latency_us": 4008.997147860896, 00:28:23.924 "min_latency_us": 1829.5466666666666, 00:28:23.924 "max_latency_us": 7045.12 00:28:23.924 } 00:28:23.924 ], 00:28:23.924 "core_count": 1 00:28:23.924 } 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:23.924 | select(.opcode=="crc32c") 00:28:23.924 | "\(.module_name) \(.executed)"' 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3428488 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3428488 ']' 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3428488 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3428488 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3428488' 00:28:23.924 killing process with pid 3428488 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3428488 00:28:23.924 Received shutdown signal, test time was about 2.000000 seconds 
00:28:23.924 00:28:23.924 Latency(us) 00:28:23.924 [2024-11-06T10:10:15.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.924 [2024-11-06T10:10:15.346Z] =================================================================================================================== 00:28:23.924 [2024-11-06T10:10:15.346Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:23.924 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3428488 00:28:24.184 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3426083 00:28:24.184 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3426083 ']' 00:28:24.184 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3426083 00:28:24.184 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:24.184 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:24.184 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3426083 00:28:24.184 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:24.184 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:24.184 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3426083' 00:28:24.184 killing process with pid 3426083 00:28:24.184 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3426083 00:28:24.184 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3426083 00:28:24.184 00:28:24.184 
real 0m16.452s 00:28:24.184 user 0m32.740s 00:28:24.184 sys 0m3.443s 00:28:24.184 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:24.184 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.184 ************************************ 00:28:24.184 END TEST nvmf_digest_clean 00:28:24.184 ************************************ 00:28:24.184 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:24.445 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:24.445 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:24.445 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:24.445 ************************************ 00:28:24.445 START TEST nvmf_digest_error 00:28:24.445 ************************************ 00:28:24.445 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:28:24.445 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:24.445 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:24.445 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:24.445 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:24.445 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3429201 00:28:24.445 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3429201 00:28:24.445 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 
3429201 ']' 00:28:24.445 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:24.445 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.445 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:24.445 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.445 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:24.445 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:24.445 [2024-11-06 11:10:15.707292] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:28:24.445 [2024-11-06 11:10:15.707341] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.445 [2024-11-06 11:10:15.784944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.445 [2024-11-06 11:10:15.819230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.445 [2024-11-06 11:10:15.819276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:24.445 [2024-11-06 11:10:15.819284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.445 [2024-11-06 11:10:15.819290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.445 [2024-11-06 11:10:15.819296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:24.445 [2024-11-06 11:10:15.819855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.386 [2024-11-06 11:10:16.529872] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.386 11:10:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.386 null0 00:28:25.386 [2024-11-06 11:10:16.612162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.386 [2024-11-06 11:10:16.636368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3429542 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3429542 /var/tmp/bperf.sock 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3429542 ']' 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:25.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:25.386 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.386 [2024-11-06 11:10:16.692487] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:28:25.386 [2024-11-06 11:10:16.692536] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429542 ] 00:28:25.386 [2024-11-06 11:10:16.775274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.386 [2024-11-06 11:10:16.805127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.326 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:26.326 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:26.326 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:26.326 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:26.326 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:26.326 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.326 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.326 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.326 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.326 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.586 nvme0n1 00:28:26.586 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:26.586 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.586 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.586 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.586 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:26.587 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:26.587 Running I/O for 2 seconds... 00:28:26.847 [2024-11-06 11:10:18.027442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.847 [2024-11-06 11:10:18.027474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-11-06 11:10:18.027483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.847 [2024-11-06 11:10:18.036347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.847 [2024-11-06 11:10:18.036366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-11-06 11:10:18.036373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.847 [2024-11-06 11:10:18.050374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.847 [2024-11-06 11:10:18.050392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-11-06 11:10:18.050399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.847 [2024-11-06 11:10:18.064032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.847 [2024-11-06 11:10:18.064051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15189 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-11-06 11:10:18.064058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.847 [2024-11-06 11:10:18.076535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.847 [2024-11-06 11:10:18.076553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-11-06 11:10:18.076560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.847 [2024-11-06 11:10:18.090350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.847 [2024-11-06 11:10:18.090368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-11-06 11:10:18.090374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.847 [2024-11-06 11:10:18.103266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.847 [2024-11-06 11:10:18.103283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-11-06 11:10:18.103289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.847 [2024-11-06 11:10:18.114617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.847 [2024-11-06 11:10:18.114635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-11-06 11:10:18.114641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.847 [2024-11-06 11:10:18.128052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.847 [2024-11-06 11:10:18.128070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-11-06 11:10:18.128081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.847 [2024-11-06 11:10:18.140143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.847 [2024-11-06 11:10:18.140160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-11-06 11:10:18.140167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.847 [2024-11-06 11:10:18.153698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.847 [2024-11-06 11:10:18.153715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-11-06 11:10:18.153721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.847 [2024-11-06 11:10:18.165930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a5c5c0) 00:28:26.847 [2024-11-06 11:10:18.165946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-11-06 11:10:18.165953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.847 [2024-11-06 11:10:18.177843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.847 [2024-11-06 11:10:18.177860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-11-06 11:10:18.177866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.847 [2024-11-06 11:10:18.190788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.847 [2024-11-06 11:10:18.190806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-11-06 11:10:18.190812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.848 [2024-11-06 11:10:18.203676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.848 [2024-11-06 11:10:18.203693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.848 [2024-11-06 11:10:18.203700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.848 [2024-11-06 11:10:18.216825] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.848 [2024-11-06 11:10:18.216843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.848 [2024-11-06 11:10:18.216849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.848 [2024-11-06 11:10:18.229307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.848 [2024-11-06 11:10:18.229324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.848 [2024-11-06 11:10:18.229330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.848 [2024-11-06 11:10:18.242443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.848 [2024-11-06 11:10:18.242460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.848 [2024-11-06 11:10:18.242466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.848 [2024-11-06 11:10:18.254078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.848 [2024-11-06 11:10:18.254095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.848 [2024-11-06 11:10:18.254102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:26.848 [2024-11-06 11:10:18.265911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:26.848 [2024-11-06 11:10:18.265928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.848 [2024-11-06 11:10:18.265934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.108 [2024-11-06 11:10:18.279181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:27.108 [2024-11-06 11:10:18.279198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.108 [2024-11-06 11:10:18.279205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.108 [2024-11-06 11:10:18.291885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:27.108 [2024-11-06 11:10:18.291902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.108 [2024-11-06 11:10:18.291909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.109 [2024-11-06 11:10:18.305031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:27.109 [2024-11-06 11:10:18.305047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.109 [2024-11-06 11:10:18.305054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.109 [2024-11-06 11:10:18.317606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.109 [2024-11-06 11:10:18.317622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.109 [2024-11-06 11:10:18.317629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.109 [2024-11-06 11:10:18.329325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.109 [2024-11-06 11:10:18.329342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.109 [2024-11-06 11:10:18.329350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.109 [2024-11-06 11:10:18.342008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.109 [2024-11-06 11:10:18.342024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.109 [2024-11-06 11:10:18.342035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.109 [2024-11-06 11:10:18.352875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.109 [2024-11-06 11:10:18.352893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.109 [2024-11-06 11:10:18.352899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.109 [2024-11-06 11:10:18.365494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.109 [2024-11-06 11:10:18.365511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.109 [2024-11-06 11:10:18.365518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.109 [2024-11-06 11:10:18.378431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.109 [2024-11-06 11:10:18.378448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.109 [2024-11-06 11:10:18.378454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.109 [2024-11-06 11:10:18.390905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.109 [2024-11-06 11:10:18.390921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.109 [2024-11-06 11:10:18.390928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.109 [2024-11-06 11:10:18.404650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.109 [2024-11-06 11:10:18.404666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.109 [2024-11-06 11:10:18.404673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.109 [2024-11-06 11:10:18.417902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.109 [2024-11-06 11:10:18.417919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.109 [2024-11-06 11:10:18.417925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.109 [2024-11-06 11:10:18.429664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.109 [2024-11-06 11:10:18.429681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.109 [2024-11-06 11:10:18.429688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.109 [2024-11-06 11:10:18.442744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.109 [2024-11-06 11:10:18.442767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.109 [2024-11-06 11:10:18.442773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.109 [2024-11-06 11:10:18.455964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.109 [2024-11-06 11:10:18.455983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.109 [2024-11-06 11:10:18.455990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.109 [2024-11-06 11:10:18.470298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.109 [2024-11-06 11:10:18.470314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.109 [2024-11-06 11:10:18.470320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.109 [2024-11-06 11:10:18.482297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.109 [2024-11-06 11:10:18.482314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.109 [2024-11-06 11:10:18.482320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.109 [2024-11-06 11:10:18.495036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.109 [2024-11-06 11:10:18.495052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.109 [2024-11-06 11:10:18.495059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.109 [2024-11-06 11:10:18.506602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.109 [2024-11-06 11:10:18.506619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.109 [2024-11-06 11:10:18.506626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.109 [2024-11-06 11:10:18.519945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.109 [2024-11-06 11:10:18.519962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.109 [2024-11-06 11:10:18.519968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.370 [2024-11-06 11:10:18.532132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.370 [2024-11-06 11:10:18.532149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.370 [2024-11-06 11:10:18.532156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.370 [2024-11-06 11:10:18.545454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.370 [2024-11-06 11:10:18.545470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.370 [2024-11-06 11:10:18.545476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.370 [2024-11-06 11:10:18.555014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.370 [2024-11-06 11:10:18.555030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.370 [2024-11-06 11:10:18.555036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.370 [2024-11-06 11:10:18.568600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.370 [2024-11-06 11:10:18.568617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.370 [2024-11-06 11:10:18.568624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.370 [2024-11-06 11:10:18.581421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.370 [2024-11-06 11:10:18.581438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.370 [2024-11-06 11:10:18.581444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.370 [2024-11-06 11:10:18.594762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.370 [2024-11-06 11:10:18.594778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.370 [2024-11-06 11:10:18.594785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.370 [2024-11-06 11:10:18.607506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.370 [2024-11-06 11:10:18.607522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.370 [2024-11-06 11:10:18.607529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.370 [2024-11-06 11:10:18.619796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.370 [2024-11-06 11:10:18.619812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.370 [2024-11-06 11:10:18.619819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.370 [2024-11-06 11:10:18.632835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.370 [2024-11-06 11:10:18.632852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.370 [2024-11-06 11:10:18.632859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.370 [2024-11-06 11:10:18.646157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.370 [2024-11-06 11:10:18.646174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.370 [2024-11-06 11:10:18.646180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.370 [2024-11-06 11:10:18.658478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.370 [2024-11-06 11:10:18.658494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.370 [2024-11-06 11:10:18.658500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.370 [2024-11-06 11:10:18.671402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.371 [2024-11-06 11:10:18.671419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.371 [2024-11-06 11:10:18.671429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.371 [2024-11-06 11:10:18.682335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.371 [2024-11-06 11:10:18.682351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.371 [2024-11-06 11:10:18.682358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.371 [2024-11-06 11:10:18.695994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.371 [2024-11-06 11:10:18.696011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.371 [2024-11-06 11:10:18.696017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.371 [2024-11-06 11:10:18.709692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.371 [2024-11-06 11:10:18.709710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.371 [2024-11-06 11:10:18.709716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.371 [2024-11-06 11:10:18.723499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.371 [2024-11-06 11:10:18.723516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.371 [2024-11-06 11:10:18.723523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.371 [2024-11-06 11:10:18.734723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.371 [2024-11-06 11:10:18.734740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.371 [2024-11-06 11:10:18.734751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.371 [2024-11-06 11:10:18.747061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.371 [2024-11-06 11:10:18.747078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.371 [2024-11-06 11:10:18.747084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.371 [2024-11-06 11:10:18.759983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.371 [2024-11-06 11:10:18.759999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.371 [2024-11-06 11:10:18.760005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.371 [2024-11-06 11:10:18.771792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.371 [2024-11-06 11:10:18.771809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.371 [2024-11-06 11:10:18.771815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.371 [2024-11-06 11:10:18.783544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.371 [2024-11-06 11:10:18.783560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.371 [2024-11-06 11:10:18.783566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:18.797019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:18.797036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:18.797042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:18.810786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:18.810803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:18.810810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:18.823483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:18.823500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:18.823506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:18.835962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:18.835978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:18.835985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:18.847607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:18.847623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:18.847630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:18.860725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:18.860742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:18.860753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:18.874538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:18.874554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:18.874561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:18.887143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:18.887159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:18.887169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:18.898438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:18.898455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:18.898461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:18.911132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:18.911150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:18.911156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:18.924564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:18.924581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:18.924587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:18.938400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:18.938417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:18.938423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:18.951166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:18.951183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:18.951190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:18.964601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:18.964617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:18.964623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:18.974300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:18.974316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:18.974322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:18.989180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:18.989196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:18.989202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:19.001246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:19.001267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:19.001273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 19981.00 IOPS, 78.05 MiB/s [2024-11-06T10:10:19.054Z] [2024-11-06 11:10:19.013115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.632 [2024-11-06 11:10:19.013132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.632 [2024-11-06 11:10:19.013138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.632 [2024-11-06 11:10:19.028373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.633 [2024-11-06 11:10:19.028390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.633 [2024-11-06 11:10:19.028396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.633 [2024-11-06 11:10:19.040493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.633 [2024-11-06 11:10:19.040509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.633 [2024-11-06 11:10:19.040516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.893 [2024-11-06 11:10:19.052270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.893 [2024-11-06 11:10:19.052288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.893 [2024-11-06 11:10:19.052294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.893 [2024-11-06 11:10:19.063030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.893 [2024-11-06 11:10:19.063047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.893 [2024-11-06 11:10:19.063053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.893 [2024-11-06 11:10:19.076205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.893 [2024-11-06 11:10:19.076221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.893 [2024-11-06 11:10:19.076228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.893 [2024-11-06 11:10:19.090718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.893 [2024-11-06 11:10:19.090735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.893 [2024-11-06 11:10:19.090741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.893 [2024-11-06 11:10:19.103357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.893 [2024-11-06 11:10:19.103373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.893 [2024-11-06 11:10:19.103380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.893 [2024-11-06 11:10:19.115284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.893 [2024-11-06 11:10:19.115301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.893 [2024-11-06 11:10:19.115308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.894 [2024-11-06 11:10:19.127213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.894 [2024-11-06 11:10:19.127230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.894 [2024-11-06 11:10:19.127237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.894 [2024-11-06 11:10:19.140660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.894 [2024-11-06 11:10:19.140677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.894 [2024-11-06 11:10:19.140683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.894 [2024-11-06 11:10:19.154136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.894 [2024-11-06 11:10:19.154153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.894 [2024-11-06 11:10:19.154160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.894 [2024-11-06 11:10:19.167305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.894 [2024-11-06 11:10:19.167322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.894 [2024-11-06 11:10:19.167329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.894 [2024-11-06 11:10:19.178819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.894 [2024-11-06 11:10:19.178836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.894 [2024-11-06 11:10:19.178842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.894 [2024-11-06 11:10:19.190762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.894 [2024-11-06 11:10:19.190780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.894 [2024-11-06 11:10:19.190786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.894 [2024-11-06 11:10:19.204659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.894 [2024-11-06 11:10:19.204676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.894 [2024-11-06 11:10:19.204683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.894 [2024-11-06 11:10:19.216671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.894 [2024-11-06 11:10:19.216691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.894 [2024-11-06 11:10:19.216698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.894 [2024-11-06 11:10:19.228675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.894 [2024-11-06 11:10:19.228692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.894 [2024-11-06 11:10:19.228698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.894 [2024-11-06 11:10:19.241579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.894 [2024-11-06 11:10:19.241597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.894 [2024-11-06 11:10:19.241604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.894 [2024-11-06 11:10:19.254346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.894 [2024-11-06 11:10:19.254362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.894 [2024-11-06 11:10:19.254369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.894 [2024-11-06 11:10:19.266903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.894 [2024-11-06 11:10:19.266920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.894 [2024-11-06 11:10:19.266926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.894 [2024-11-06 11:10:19.280011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.894 [2024-11-06 11:10:19.280028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.894 [2024-11-06 11:10:19.280034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.894 [2024-11-06 11:10:19.292586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.894 [2024-11-06 11:10:19.292603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.894 [2024-11-06 11:10:19.292610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.894 [2024-11-06 11:10:19.302851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:27.894 [2024-11-06 11:10:19.302867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.894 [2024-11-06 11:10:19.302874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:28.155 [2024-11-06 11:10:19.316482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:28.155 [2024-11-06 11:10:19.316498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.155 [2024-11-06 11:10:19.316504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:28.155 [2024-11-06 11:10:19.329634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0)
00:28:28.155 [2024-11-06 11:10:19.329652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.155 [2024-11-06
11:10:19.329658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.155 [2024-11-06 11:10:19.342582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.155 [2024-11-06 11:10:19.342599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.155 [2024-11-06 11:10:19.342606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.155 [2024-11-06 11:10:19.355475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.155 [2024-11-06 11:10:19.355493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.155 [2024-11-06 11:10:19.355499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.155 [2024-11-06 11:10:19.368417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.155 [2024-11-06 11:10:19.368435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.155 [2024-11-06 11:10:19.368442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.155 [2024-11-06 11:10:19.381130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.155 [2024-11-06 11:10:19.381147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14120 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.155 [2024-11-06 11:10:19.381154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.155 [2024-11-06 11:10:19.393297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.155 [2024-11-06 11:10:19.393314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.155 [2024-11-06 11:10:19.393321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.155 [2024-11-06 11:10:19.405913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.155 [2024-11-06 11:10:19.405930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.155 [2024-11-06 11:10:19.405937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.155 [2024-11-06 11:10:19.417647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.155 [2024-11-06 11:10:19.417664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.155 [2024-11-06 11:10:19.417671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.156 [2024-11-06 11:10:19.431047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.156 [2024-11-06 11:10:19.431065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.156 [2024-11-06 11:10:19.431074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.156 [2024-11-06 11:10:19.443888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.156 [2024-11-06 11:10:19.443905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.156 [2024-11-06 11:10:19.443911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.156 [2024-11-06 11:10:19.456447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.156 [2024-11-06 11:10:19.456464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.156 [2024-11-06 11:10:19.456471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.156 [2024-11-06 11:10:19.467918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.156 [2024-11-06 11:10:19.467936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.156 [2024-11-06 11:10:19.467943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.156 [2024-11-06 11:10:19.480211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a5c5c0) 00:28:28.156 [2024-11-06 11:10:19.480228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.156 [2024-11-06 11:10:19.480235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.156 [2024-11-06 11:10:19.492972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.156 [2024-11-06 11:10:19.492989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.156 [2024-11-06 11:10:19.492995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.156 [2024-11-06 11:10:19.507154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.156 [2024-11-06 11:10:19.507172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.156 [2024-11-06 11:10:19.507178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.156 [2024-11-06 11:10:19.519416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.156 [2024-11-06 11:10:19.519433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.156 [2024-11-06 11:10:19.519439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.156 [2024-11-06 11:10:19.530275] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.156 [2024-11-06 11:10:19.530292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.156 [2024-11-06 11:10:19.530298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.156 [2024-11-06 11:10:19.542893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.156 [2024-11-06 11:10:19.542913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.156 [2024-11-06 11:10:19.542920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.156 [2024-11-06 11:10:19.556901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.156 [2024-11-06 11:10:19.556917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.156 [2024-11-06 11:10:19.556924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.156 [2024-11-06 11:10:19.568443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.156 [2024-11-06 11:10:19.568460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.156 [2024-11-06 11:10:19.568466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.580292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.580309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.580315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.594063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.594081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.594087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.609058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.609075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.609081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.621254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.621271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.621277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.634499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.634516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.634522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.646390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.646406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.646413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.657307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.657324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.657330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.670424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.670441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 
11:10:19.670447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.683810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.683826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.683833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.696325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.696341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.696347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.707059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.707077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.707083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.720824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.720841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23285 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.720848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.734554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.734572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.734578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.747266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.747284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.747290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.760046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.760063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.760073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.771568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.771585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.771591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.784735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.784758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.784765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.796812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.796830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.796836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.810631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.810647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.810653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.821660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.821677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.821683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.416 [2024-11-06 11:10:19.833634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.416 [2024-11-06 11:10:19.833651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.416 [2024-11-06 11:10:19.833658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.677 [2024-11-06 11:10:19.846819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.677 [2024-11-06 11:10:19.846836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.677 [2024-11-06 11:10:19.846842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.677 [2024-11-06 11:10:19.859653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.677 [2024-11-06 11:10:19.859670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.677 [2024-11-06 11:10:19.859676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.677 [2024-11-06 11:10:19.872509] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.677 [2024-11-06 11:10:19.872525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.677 [2024-11-06 11:10:19.872532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.677 [2024-11-06 11:10:19.883530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.677 [2024-11-06 11:10:19.883547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.677 [2024-11-06 11:10:19.883554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.677 [2024-11-06 11:10:19.897159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.677 [2024-11-06 11:10:19.897176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.677 [2024-11-06 11:10:19.897182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.677 [2024-11-06 11:10:19.910515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.677 [2024-11-06 11:10:19.910532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.677 [2024-11-06 11:10:19.910538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:28.677 [2024-11-06 11:10:19.922935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.677 [2024-11-06 11:10:19.922951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.677 [2024-11-06 11:10:19.922957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.677 [2024-11-06 11:10:19.933904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.677 [2024-11-06 11:10:19.933921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.677 [2024-11-06 11:10:19.933927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.677 [2024-11-06 11:10:19.948555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.677 [2024-11-06 11:10:19.948571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.677 [2024-11-06 11:10:19.948578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.677 [2024-11-06 11:10:19.961358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.677 [2024-11-06 11:10:19.961374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.677 [2024-11-06 11:10:19.961380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.677 [2024-11-06 11:10:19.972420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.677 [2024-11-06 11:10:19.972436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.677 [2024-11-06 11:10:19.972446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.677 [2024-11-06 11:10:19.984905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.677 [2024-11-06 11:10:19.984922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.677 [2024-11-06 11:10:19.984929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.677 [2024-11-06 11:10:19.998760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.677 [2024-11-06 11:10:19.998776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.677 [2024-11-06 11:10:19.998782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.677 20112.50 IOPS, 78.56 MiB/s [2024-11-06T10:10:20.099Z] [2024-11-06 11:10:20.010646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5c5c0) 00:28:28.677 [2024-11-06 11:10:20.010663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9431 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:28.677 [2024-11-06 11:10:20.010670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.677 00:28:28.677 Latency(us) 00:28:28.677 [2024-11-06T10:10:20.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.677 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:28.677 nvme0n1 : 2.00 20139.65 78.67 0.00 0.00 6349.96 2498.56 17585.49 00:28:28.677 [2024-11-06T10:10:20.099Z] =================================================================================================================== 00:28:28.677 [2024-11-06T10:10:20.099Z] Total : 20139.65 78.67 0.00 0.00 6349.96 2498.56 17585.49 00:28:28.677 { 00:28:28.677 "results": [ 00:28:28.677 { 00:28:28.677 "job": "nvme0n1", 00:28:28.677 "core_mask": "0x2", 00:28:28.677 "workload": "randread", 00:28:28.677 "status": "finished", 00:28:28.677 "queue_depth": 128, 00:28:28.677 "io_size": 4096, 00:28:28.677 "runtime": 2.003659, 00:28:28.677 "iops": 20139.65450208843, 00:28:28.677 "mibps": 78.67052539878293, 00:28:28.677 "io_failed": 0, 00:28:28.677 "io_timeout": 0, 00:28:28.677 "avg_latency_us": 6349.963745942061, 00:28:28.677 "min_latency_us": 2498.56, 00:28:28.677 "max_latency_us": 17585.493333333332 00:28:28.677 } 00:28:28.677 ], 00:28:28.677 "core_count": 1 00:28:28.677 } 00:28:28.677 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:28.677 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:28.677 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:28.677 | .driver_specific 00:28:28.677 | .nvme_error 00:28:28.677 | .status_code 00:28:28.677 | .command_transient_transport_error' 00:28:28.677 11:10:20 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:28.937 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 )) 00:28:28.937 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3429542 00:28:28.937 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3429542 ']' 00:28:28.937 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3429542 00:28:28.937 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:28.937 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:28.937 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3429542 00:28:28.937 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:28.937 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:28.937 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3429542' 00:28:28.937 killing process with pid 3429542 00:28:28.938 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3429542 00:28:28.938 Received shutdown signal, test time was about 2.000000 seconds 00:28:28.938 00:28:28.938 Latency(us) 00:28:28.938 [2024-11-06T10:10:20.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.938 [2024-11-06T10:10:20.360Z] 
=================================================================================================================== 00:28:28.938 [2024-11-06T10:10:20.360Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:28.938 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3429542 00:28:29.198 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:29.198 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:29.198 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:29.198 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:29.198 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:29.198 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3430227 00:28:29.198 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3430227 /var/tmp/bperf.sock 00:28:29.198 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3430227 ']' 00:28:29.198 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:29.198 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:29.198 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:29.198 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
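The `get_transient_errcount` check above (`(( 158 > 0 ))`) works by asking bdevperf for per-bdev I/O statistics over RPC and extracting one nested counter with jq. The nested field path below is taken verbatim from the jq filter in the log; the sample JSON shape is a hypothetical reply, sketched only to show where the counter lives when `bdev_nvme_set_options --nvme-error-stat` is enabled:

```python
import json

def get_transient_errcount(iostat_json: str) -> int:
    """Python mirror of the jq filter used by host/digest.sh:
    .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error
    (field path copied from the log; reply shape below is illustrative)."""
    stats = json.loads(iostat_json)
    bdev = stats["bdevs"][0]
    return bdev["driver_specific"]["nvme_error"]["status_code"][
        "command_transient_transport_error"
    ]

# Hypothetical bdev_get_iostat reply carrying the count seen in this run
sample = json.dumps({
    "bdevs": [{
        "name": "nvme0n1",
        "driver_specific": {
            "nvme_error": {
                "status_code": {"command_transient_transport_error": 158}
            }
        }
    }]
})
print(get_transient_errcount(sample))
```

The test passes as long as this counter is nonzero, i.e. at least one read completed with COMMAND TRANSIENT TRANSPORT ERROR during the injection window.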
00:28:29.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:29.198 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:29.198 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:29.198 [2024-11-06 11:10:20.429471] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:28:29.198 [2024-11-06 11:10:20.429526] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430227 ] 00:28:29.198 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:29.198 Zero copy mechanism will not be used. 00:28:29.198 [2024-11-06 11:10:20.511581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.198 [2024-11-06 11:10:20.540057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.138 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:30.138 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:30.138 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:30.138 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:30.138 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:30.139 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.139 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:30.139 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.139 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:30.139 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:30.399 nvme0n1 00:28:30.399 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:30.399 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.399 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:30.399 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.399 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:30.399 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:30.399 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:30.399 Zero copy mechanism will not be used. 00:28:30.399 Running I/O for 2 seconds... 
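The setup above arms `accel_error_inject_error -o crc32c -t corrupt -i 32` after attaching the controller with `--ddgst`, so every 32nd accel crc32c computation is corrupted and the host's recomputed data digest no longer matches the digest on the wire; each affected read then completes as a data digest error and is surfaced as a transient transport error. A minimal sketch of that compare-and-fail path, using `zlib.crc32` purely as a stand-in for the crc32c used by NVMe/TCP (different polynomial; illustration only):

```python
# Sketch: host-side data digest verification with an injected corruption,
# emulating the effect of 'accel_error_inject_error -o crc32c -t corrupt'.
import zlib

def verify_data_digest(payload: bytes, wire_digest: int, corrupt: bool = False) -> bool:
    computed = zlib.crc32(payload)
    if corrupt:
        # Injected fault: flip a bit in the computed digest, as the accel
        # error injector corrupts the crc32c result.
        computed ^= 0x1
    return computed == wire_digest

data = b"nvme/tcp c2h payload"
wire = zlib.crc32(data)
print(verify_data_digest(data, wire))                # matches: I/O completes normally
print(verify_data_digest(data, wire, corrupt=True))  # mismatch: digest error, I/O retried
```

A mismatch is deliberately treated as transient (the payload may have been damaged in flight), which is why the log shows each failed read reported with `dnr:0` and retried rather than failed permanently.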
00:28:30.399 [2024-11-06 11:10:21.741948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.399 [2024-11-06 11:10:21.741981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.399 [2024-11-06 11:10:21.741990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.399 [2024-11-06 11:10:21.753012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.399 [2024-11-06 11:10:21.753033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.399 [2024-11-06 11:10:21.753040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.399 [2024-11-06 11:10:21.764226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.399 [2024-11-06 11:10:21.764245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.399 [2024-11-06 11:10:21.764252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.399 [2024-11-06 11:10:21.775194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.399 [2024-11-06 11:10:21.775211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.399 [2024-11-06 11:10:21.775218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.399 [2024-11-06 11:10:21.786077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.399 [2024-11-06 11:10:21.786098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.399 [2024-11-06 11:10:21.786105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.399 [2024-11-06 11:10:21.796904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.399 [2024-11-06 11:10:21.796921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.399 [2024-11-06 11:10:21.796927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.399 [2024-11-06 11:10:21.805229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.399 [2024-11-06 11:10:21.805246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.399 [2024-11-06 11:10:21.805253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.399 [2024-11-06 11:10:21.814886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.399 [2024-11-06 11:10:21.814904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.399 [2024-11-06 11:10:21.814911] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.660 [2024-11-06 11:10:21.826347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.660 [2024-11-06 11:10:21.826365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.660 [2024-11-06 11:10:21.826371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.660 [2024-11-06 11:10:21.835056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.660 [2024-11-06 11:10:21.835073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.660 [2024-11-06 11:10:21.835080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.660 [2024-11-06 11:10:21.846650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.660 [2024-11-06 11:10:21.846667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.660 [2024-11-06 11:10:21.846673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.660 [2024-11-06 11:10:21.858768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.660 [2024-11-06 11:10:21.858785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:30.660 [2024-11-06 11:10:21.858792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.660 [2024-11-06 11:10:21.871323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.660 [2024-11-06 11:10:21.871341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.660 [2024-11-06 11:10:21.871351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.660 [2024-11-06 11:10:21.884083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:21.884100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:21.884107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:21.894970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:21.894986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:21.894993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:21.903182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:21.903198] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:21.903205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:21.910988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:21.911005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:21.911011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:21.922661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:21.922678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:21.922684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:21.932333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:21.932350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:21.932356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:21.942416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 
11:10:21.942434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:21.942440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:21.950848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:21.950865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:21.950872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:21.960220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:21.960241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:21.960247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:21.970222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:21.970240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:21.970247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:21.980230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:21.980247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:21.980254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:21.989622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:21.989639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:21.989645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:22.002339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:22.002356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:22.002362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:22.014281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:22.014298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:22.014304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:22.027044] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:22.027062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:22.027068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:22.039175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:22.039192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:22.039199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:22.051571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:22.051588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:22.051594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:22.063174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:22.063192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:22.063199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:28:30.661 [2024-11-06 11:10:22.072361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.661 [2024-11-06 11:10:22.072379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.661 [2024-11-06 11:10:22.072386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.082268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.082286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.082292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.092507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.092524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.092531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.102085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.102103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.102109] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.112103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.112120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.112127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.121526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.121543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.121550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.129577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.129594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.129600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.136668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.136686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 
11:10:22.136695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.146893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.146910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.146917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.157323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.157340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.157347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.164612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.164629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.164635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.175671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.175688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.175694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.185592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.185609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.185615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.197383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.197400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.197406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.208989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.209006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.209012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.221823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.221840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.221847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.235302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.235319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.235325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.248728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.248749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.248756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.261580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.261597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.261603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.274008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.274025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.274031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.286932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.286950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.922 [2024-11-06 11:10:22.286956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.922 [2024-11-06 11:10:22.299586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.922 [2024-11-06 11:10:22.299603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.923 [2024-11-06 11:10:22.299609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.923 [2024-11-06 11:10:22.312352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.923 [2024-11-06 11:10:22.312368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.923 [2024-11-06 11:10:22.312375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.923 [2024-11-06 11:10:22.324829] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.923 [2024-11-06 11:10:22.324846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.923 [2024-11-06 11:10:22.324852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.923 [2024-11-06 11:10:22.334862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:30.923 [2024-11-06 11:10:22.334878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.923 [2024-11-06 11:10:22.334888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.183 [2024-11-06 11:10:22.346067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.183 [2024-11-06 11:10:22.346085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.183 [2024-11-06 11:10:22.346091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.183 [2024-11-06 11:10:22.354401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.183 [2024-11-06 11:10:22.354418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.183 [2024-11-06 11:10:22.354425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:28:31.183 [2024-11-06 11:10:22.365000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.183 [2024-11-06 11:10:22.365017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.183 [2024-11-06 11:10:22.365023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.183 [2024-11-06 11:10:22.372922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.183 [2024-11-06 11:10:22.372939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.183 [2024-11-06 11:10:22.372946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.381570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.381587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.381593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.393018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.393036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.393042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.403344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.403362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.403368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.413854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.413872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.413879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.425123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.425145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.425151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.435704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.435722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 
11:10:22.435729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.447486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.447504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.447511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.455065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.455083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.455089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.464464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.464482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.464489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.474876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.474902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.474908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.482782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.482800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.482807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.490245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.490263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.490269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.499776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.499795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.499801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.509141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.509159] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.509165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.517234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.517253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.517259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.527398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.527416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.527422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.535593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.535611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.535618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.545319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 
00:28:31.184 [2024-11-06 11:10:22.545337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.545343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.555772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.555790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.555796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.564636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.564654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.564661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.575572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.575590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.575596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.581208] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.581226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.581236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.587106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.587124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.587130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.593023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.593041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.593048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.184 [2024-11-06 11:10:22.600450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.184 [2024-11-06 11:10:22.600468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.184 [2024-11-06 11:10:22.600474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:28:31.473 [2024-11-06 11:10:22.608303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.473 [2024-11-06 11:10:22.608321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.473 [2024-11-06 11:10:22.608328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.473 [2024-11-06 11:10:22.614576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.473 [2024-11-06 11:10:22.614594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.473 [2024-11-06 11:10:22.614600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.473 [2024-11-06 11:10:22.620154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.473 [2024-11-06 11:10:22.620172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.473 [2024-11-06 11:10:22.620178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.473 [2024-11-06 11:10:22.625957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.473 [2024-11-06 11:10:22.625976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.473 [2024-11-06 11:10:22.625982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.473 [2024-11-06 11:10:22.634833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.473 [2024-11-06 11:10:22.634851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.473 [2024-11-06 11:10:22.634857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.473 [2024-11-06 11:10:22.644365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.473 [2024-11-06 11:10:22.644388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.473 [2024-11-06 11:10:22.644394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.473 [2024-11-06 11:10:22.654077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.473 [2024-11-06 11:10:22.654095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.473 [2024-11-06 11:10:22.654101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.473 [2024-11-06 11:10:22.662596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.473 [2024-11-06 11:10:22.662614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.473 [2024-11-06 11:10:22.662620] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.473 [2024-11-06 11:10:22.672738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.473 [2024-11-06 11:10:22.672761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.473 [2024-11-06 11:10:22.672768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.473 [2024-11-06 11:10:22.682956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.473 [2024-11-06 11:10:22.682975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.473 [2024-11-06 11:10:22.682982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.473 [2024-11-06 11:10:22.691988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.473 [2024-11-06 11:10:22.692007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.473 [2024-11-06 11:10:22.692014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.473 [2024-11-06 11:10:22.702006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.473 [2024-11-06 11:10:22.702024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:31.473 [2024-11-06 11:10:22.702030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.473 [2024-11-06 11:10:22.709243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.473 [2024-11-06 11:10:22.709262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.473 [2024-11-06 11:10:22.709269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.473 [2024-11-06 11:10:22.718876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.473 [2024-11-06 11:10:22.718895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.473 [2024-11-06 11:10:22.718902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.473 [2024-11-06 11:10:22.727414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.473 [2024-11-06 11:10:22.727432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.473 [2024-11-06 11:10:22.727439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.473 3112.00 IOPS, 389.00 MiB/s [2024-11-06T10:10:22.896Z] [2024-11-06 11:10:22.736301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.474 [2024-11-06 11:10:22.736319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.474 [2024-11-06 11:10:22.736326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.474 [2024-11-06 11:10:22.746616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.474 [2024-11-06 11:10:22.746634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.474 [2024-11-06 11:10:22.746640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.474 [2024-11-06 11:10:22.756756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.474 [2024-11-06 11:10:22.756774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.474 [2024-11-06 11:10:22.756780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.474 [2024-11-06 11:10:22.765102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.474 [2024-11-06 11:10:22.765120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.474 [2024-11-06 11:10:22.765127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.474 [2024-11-06 11:10:22.773585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13e0a20) 00:28:31.474 [2024-11-06 11:10:22.773603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.474 [2024-11-06 11:10:22.773610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.474 [2024-11-06 11:10:22.780858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.474 [2024-11-06 11:10:22.780876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.474 [2024-11-06 11:10:22.780883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.474 [2024-11-06 11:10:22.789753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.474 [2024-11-06 11:10:22.789771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.474 [2024-11-06 11:10:22.789777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.474 [2024-11-06 11:10:22.797440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:31.474 [2024-11-06 11:10:22.797461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.474 [2024-11-06 11:10:22.797467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.474 [2024-11-06 11:10:22.803122] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.474 [2024-11-06 11:10:22.803140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.474 [2024-11-06 11:10:22.803147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.474 [2024-11-06 11:10:22.809775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.474 [2024-11-06 11:10:22.809794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.474 [2024-11-06 11:10:22.809801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.474 [2024-11-06 11:10:22.819693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.474 [2024-11-06 11:10:22.819711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.474 [2024-11-06 11:10:22.819717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.474 [2024-11-06 11:10:22.829349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.474 [2024-11-06 11:10:22.829367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.474 [2024-11-06 11:10:22.829373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.474 [2024-11-06 11:10:22.840011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.474 [2024-11-06 11:10:22.840029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.474 [2024-11-06 11:10:22.840035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.474 [2024-11-06 11:10:22.850888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.474 [2024-11-06 11:10:22.850907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.474 [2024-11-06 11:10:22.850913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.474 [2024-11-06 11:10:22.862589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.474 [2024-11-06 11:10:22.862607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.474 [2024-11-06 11:10:22.862614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.474 [2024-11-06 11:10:22.874035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.474 [2024-11-06 11:10:22.874054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.474 [2024-11-06 11:10:22.874060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.771 [2024-11-06 11:10:22.881986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.771 [2024-11-06 11:10:22.882005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.771 [2024-11-06 11:10:22.882011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.771 [2024-11-06 11:10:22.889874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.771 [2024-11-06 11:10:22.889893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.771 [2024-11-06 11:10:22.889899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.771 [2024-11-06 11:10:22.897048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.771 [2024-11-06 11:10:22.897067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.771 [2024-11-06 11:10:22.897074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.771 [2024-11-06 11:10:22.906310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.771 [2024-11-06 11:10:22.906328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.771 [2024-11-06 11:10:22.906335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:22.912327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:22.912346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:22.912352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:22.917787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:22.917805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:22.917812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:22.928644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:22.928663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:22.928670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:22.938874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:22.938892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:22.938899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:22.948779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:22.948797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:22.948810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:22.954540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:22.954559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:22.954565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:22.959931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:22.959948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:22.959954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:22.968477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:22.968495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:22.968502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:22.978074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:22.978092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:22.978098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:22.986436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:22.986454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:22.986460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:22.994404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:22.994422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:22.994428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:23.000553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:23.000571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:23.000578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:23.008229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:23.008247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:23.008253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:23.014168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:23.014189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:23.014195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:23.024322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:23.024339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:23.024345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:23.030458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:23.030475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:23.030482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:23.039259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:23.039276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:23.039282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:23.049832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:23.049849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:23.049855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:23.062985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:23.063002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:23.063010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:23.075050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:23.075068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:23.075074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:23.083067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:23.083084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:23.083090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:23.093608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:23.093625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:23.093631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:23.102924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:23.102942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:23.102948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:23.111796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:23.111813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:23.111819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:23.122950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:23.122967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:23.122974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:23.131387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:23.131405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.772 [2024-11-06 11:10:23.131412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.772 [2024-11-06 11:10:23.140978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.772 [2024-11-06 11:10:23.140996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.773 [2024-11-06 11:10:23.141002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.773 [2024-11-06 11:10:23.150466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.773 [2024-11-06 11:10:23.150484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.773 [2024-11-06 11:10:23.150491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.773 [2024-11-06 11:10:23.159933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.773 [2024-11-06 11:10:23.159951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.773 [2024-11-06 11:10:23.159957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.773 [2024-11-06 11:10:23.170918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.773 [2024-11-06 11:10:23.170936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.773 [2024-11-06 11:10:23.170942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.773 [2024-11-06 11:10:23.180956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:31.773 [2024-11-06 11:10:23.180975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.773 [2024-11-06 11:10:23.180984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.051 [2024-11-06 11:10:23.186577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.051 [2024-11-06 11:10:23.186596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.051 [2024-11-06 11:10:23.186602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.051 [2024-11-06 11:10:23.196097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.051 [2024-11-06 11:10:23.196115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.051 [2024-11-06 11:10:23.196121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.051 [2024-11-06 11:10:23.201629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.051 [2024-11-06 11:10:23.201646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.051 [2024-11-06 11:10:23.201652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.051 [2024-11-06 11:10:23.209588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.051 [2024-11-06 11:10:23.209606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.051 [2024-11-06 11:10:23.209613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.051 [2024-11-06 11:10:23.220056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.051 [2024-11-06 11:10:23.220075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.051 [2024-11-06 11:10:23.220081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.051 [2024-11-06 11:10:23.228656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.051 [2024-11-06 11:10:23.228674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.051 [2024-11-06 11:10:23.228680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.239242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.239261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.239267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.251319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.251338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.251345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.261511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.261533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.261539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.273878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.273897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.273903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.282037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.282055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.282061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.289969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.289987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.289993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.298377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.298394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.298400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.307110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.307128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.307135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.318092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.318110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.318117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.329889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.329907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.329914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.342545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.342563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.342569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.352930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.352949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.352955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.362420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.362438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.362444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.375020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.375038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.375045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.388167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.388185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.388192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.400118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.400136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.400143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.406141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.406158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.406165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.410965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.410983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.410989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.418521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.418538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.418545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.428818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.428839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.428846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.436502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.436520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.436527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.444670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.444688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.444694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.452195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.452213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.452219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.052 [2024-11-06 11:10:23.459995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.052 [2024-11-06 11:10:23.460013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.052 [2024-11-06 11:10:23.460019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.314 [2024-11-06 11:10:23.471291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.314 [2024-11-06 11:10:23.471310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.314 [2024-11-06 11:10:23.471316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.314 [2024-11-06 11:10:23.481180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.314 [2024-11-06 11:10:23.481198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.315 [2024-11-06 11:10:23.481205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.315 [2024-11-06 11:10:23.487564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.315 [2024-11-06 11:10:23.487582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.315 [2024-11-06 11:10:23.487588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.315 [2024-11-06 11:10:23.499054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.315 [2024-11-06 11:10:23.499073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.315 [2024-11-06 11:10:23.499079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.315 [2024-11-06 11:10:23.510260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.315 [2024-11-06 11:10:23.510278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.315 [2024-11-06 11:10:23.510285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.315 [2024-11-06 11:10:23.519851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.315 [2024-11-06 11:10:23.519868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.315 [2024-11-06 11:10:23.519874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.315 [2024-11-06 11:10:23.524967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.315 [2024-11-06 11:10:23.524985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.315 [2024-11-06 11:10:23.524991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.315 [2024-11-06 11:10:23.532004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.315 [2024-11-06 11:10:23.532022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.315 [2024-11-06 11:10:23.532029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.315 [2024-11-06 11:10:23.539033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20)
00:28:32.315 [2024-11-06 11:10:23.539051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.539058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.549230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.549248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.549254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.555901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.555918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.555924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.566100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.566118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.566124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.574905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.574923] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.574933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.583822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.583840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.583846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.593493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.593511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.593518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.603628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.603645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.603651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.611829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.611847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.611854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.617205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.617224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.617230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.628270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.628289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.628295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.637612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.637630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.637637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.647567] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.647586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.647592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.658086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.658108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.658115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.667903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.667921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.667927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.679308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.679327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.679333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.691119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.691138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.691144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.702097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.702116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.702122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.712668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.712687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.712694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.315 [2024-11-06 11:10:23.722846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.315 [2024-11-06 11:10:23.722866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.315 [2024-11-06 11:10:23.722873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.315 3252.00 IOPS, 406.50 MiB/s [2024-11-06T10:10:23.737Z] [2024-11-06 11:10:23.733595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13e0a20) 00:28:32.316 [2024-11-06 11:10:23.733610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.316 [2024-11-06 11:10:23.733616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.577 00:28:32.577 Latency(us) 00:28:32.577 [2024-11-06T10:10:23.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.577 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:32.577 nvme0n1 : 2.00 3256.22 407.03 0.00 0.00 4909.12 1044.48 13707.95 00:28:32.577 [2024-11-06T10:10:23.999Z] =================================================================================================================== 00:28:32.577 [2024-11-06T10:10:23.999Z] Total : 3256.22 407.03 0.00 0.00 4909.12 1044.48 13707.95 00:28:32.577 { 00:28:32.577 "results": [ 00:28:32.577 { 00:28:32.577 "job": "nvme0n1", 00:28:32.577 "core_mask": "0x2", 00:28:32.577 "workload": "randread", 00:28:32.577 "status": "finished", 00:28:32.577 "queue_depth": 16, 00:28:32.577 "io_size": 131072, 00:28:32.577 "runtime": 2.002321, 00:28:32.577 "iops": 3256.221155349217, 00:28:32.577 "mibps": 407.02764441865213, 00:28:32.577 "io_failed": 0, 00:28:32.577 "io_timeout": 0, 00:28:32.577 "avg_latency_us": 4909.124580777096, 00:28:32.577 "min_latency_us": 1044.48, 00:28:32.577 "max_latency_us": 13707.946666666667 00:28:32.577 } 00:28:32.577 ], 00:28:32.577 "core_count": 1 00:28:32.577 } 00:28:32.577 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:32.577 
11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:32.577 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:32.577 | .driver_specific 00:28:32.577 | .nvme_error 00:28:32.577 | .status_code 00:28:32.577 | .command_transient_transport_error' 00:28:32.577 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:32.577 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 210 > 0 )) 00:28:32.577 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3430227 00:28:32.577 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3430227 ']' 00:28:32.577 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3430227 00:28:32.577 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:32.577 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:32.577 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3430227 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3430227' 00:28:32.838 killing process with pid 3430227 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@971 -- # kill 3430227 00:28:32.838 Received shutdown signal, test time was about 2.000000 seconds 00:28:32.838 00:28:32.838 Latency(us) 00:28:32.838 [2024-11-06T10:10:24.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.838 [2024-11-06T10:10:24.260Z] =================================================================================================================== 00:28:32.838 [2024-11-06T10:10:24.260Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3430227 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3430922 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3430922 /var/tmp/bperf.sock 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3430922 ']' 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:32.838 11:10:24 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:32.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:32.838 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:32.838 [2024-11-06 11:10:24.151709] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:28:32.838 [2024-11-06 11:10:24.151779] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430922 ] 00:28:32.838 [2024-11-06 11:10:24.235578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.099 [2024-11-06 11:10:24.265181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.670 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:33.670 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:33.670 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:33.670 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:33.931 11:10:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:33.931 11:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.931 11:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:33.931 11:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.931 11:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:33.931 11:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.191 nvme0n1 00:28:34.191 11:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:34.191 11:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.191 11:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.191 11:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.191 11:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:34.191 11:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:34.191 Running I/O for 2 seconds... 
00:28:34.191 [2024-11-06 11:10:25.492301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e5658 00:28:34.191 [2024-11-06 11:10:25.493201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.191 [2024-11-06 11:10:25.493229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:34.191 [2024-11-06 11:10:25.504496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e4578 00:28:34.191 [2024-11-06 11:10:25.505531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.191 [2024-11-06 11:10:25.505548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:34.191 [2024-11-06 11:10:25.518041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166efae0 00:28:34.191 [2024-11-06 11:10:25.519594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.192 [2024-11-06 11:10:25.519611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:34.192 [2024-11-06 11:10:25.528382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f0350 00:28:34.192 [2024-11-06 11:10:25.529330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.192 [2024-11-06 11:10:25.529347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:34.192 [2024-11-06 11:10:25.542012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e5ec8 00:28:34.192 [2024-11-06 11:10:25.543593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.192 [2024-11-06 11:10:25.543610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:34.192 [2024-11-06 11:10:25.551682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e12d8 00:28:34.192 [2024-11-06 11:10:25.552603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.192 [2024-11-06 11:10:25.552619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:34.192 [2024-11-06 11:10:25.566053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e01f8 00:28:34.192 [2024-11-06 11:10:25.567594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.192 [2024-11-06 11:10:25.567611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:34.192 [2024-11-06 11:10:25.575700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e6fa8 00:28:34.192 [2024-11-06 11:10:25.576626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.192 [2024-11-06 11:10:25.576642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:34.192 [2024-11-06 11:10:25.590120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e5ec8 00:28:34.192 [2024-11-06 11:10:25.591704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.192 [2024-11-06 11:10:25.591720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:34.192 [2024-11-06 11:10:25.600571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa7d8 00:28:34.192 [2024-11-06 11:10:25.601520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.192 [2024-11-06 11:10:25.601536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.612531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa7d8 00:28:34.453 [2024-11-06 11:10:25.613472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.613488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.624501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa7d8 00:28:34.453 [2024-11-06 11:10:25.625433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.625450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.636445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa7d8 00:28:34.453 [2024-11-06 11:10:25.637381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.637397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.648387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa7d8 00:28:34.453 [2024-11-06 11:10:25.649276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.649292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.660354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e0ea0 00:28:34.453 [2024-11-06 11:10:25.661274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.661291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.672319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f0bc0 00:28:34.453 [2024-11-06 11:10:25.673254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 
[2024-11-06 11:10:25.673271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.683519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e73e0 00:28:34.453 [2024-11-06 11:10:25.684422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.684438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.696268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fb048 00:28:34.453 [2024-11-06 11:10:25.697160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.697179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.708523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e0ea0 00:28:34.453 [2024-11-06 11:10:25.709447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.709464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.719690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa7d8 00:28:34.453 [2024-11-06 11:10:25.720587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12474 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.720602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.732450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fb8b8 00:28:34.453 [2024-11-06 11:10:25.733355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.733371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.744421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e73e0 00:28:34.453 [2024-11-06 11:10:25.745356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.745372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.756437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f9f68 00:28:34.453 [2024-11-06 11:10:25.757350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.757366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.768398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e84c0 00:28:34.453 [2024-11-06 11:10:25.769301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:94 nsid:1 lba:7005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.769317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.781959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f1430 00:28:34.453 [2024-11-06 11:10:25.783478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.783494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.792418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e0630 00:28:34.453 [2024-11-06 11:10:25.793332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.793349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.805956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fb048 00:28:34.453 [2024-11-06 11:10:25.807517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.807533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.816395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f9f68 00:28:34.453 [2024-11-06 11:10:25.817314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.817331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.829926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f1430 00:28:34.453 [2024-11-06 11:10:25.831480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.831497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.840338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e0630 00:28:34.453 [2024-11-06 11:10:25.841266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.841282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.853813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e1710 00:28:34.453 [2024-11-06 11:10:25.855360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.855376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:34.453 [2024-11-06 11:10:25.865652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e0ea0 00:28:34.453 
[2024-11-06 11:10:25.867181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.453 [2024-11-06 11:10:25.867197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:25.876048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa3a0 00:28:34.715 [2024-11-06 11:10:25.876922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:25.876938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:25.887992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa3a0 00:28:34.715 [2024-11-06 11:10:25.888873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:25.888892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:25.899951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa3a0 00:28:34.715 [2024-11-06 11:10:25.900819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:25.900835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:25.911906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) 
with pdu=0x2000166fa3a0 00:28:34.715 [2024-11-06 11:10:25.912785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:25.912803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:25.923850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e7c50 00:28:34.715 [2024-11-06 11:10:25.924733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:25.924752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:25.935048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f9b30 00:28:34.715 [2024-11-06 11:10:25.935919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:25.935935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:25.947778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f9b30 00:28:34.715 [2024-11-06 11:10:25.948657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:25.948673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:25.959719] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f9b30 00:28:34.715 [2024-11-06 11:10:25.960592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:25.960608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:25.971634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166dece0 00:28:34.715 [2024-11-06 11:10:25.972505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:25.972522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:25.985179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa3a0 00:28:34.715 [2024-11-06 11:10:25.986675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:25.986691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:25.994809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e23b8 00:28:34.715 [2024-11-06 11:10:25.995663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:25.995678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 
11:10:26.007513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e23b8 00:28:34.715 [2024-11-06 11:10:26.008374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:26.008393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:26.019469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e23b8 00:28:34.715 [2024-11-06 11:10:26.020328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:26.020344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:26.031393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e23b8 00:28:34.715 [2024-11-06 11:10:26.032247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:26.032263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:26.044940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e23b8 00:28:34.715 [2024-11-06 11:10:26.046436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:26.046452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:26.057682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166dece0 00:28:34.715 [2024-11-06 11:10:26.059209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:26.059225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:26.067329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f9b30 00:28:34.715 [2024-11-06 11:10:26.068193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:26.068210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:26.080882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166de8a8 00:28:34.715 [2024-11-06 11:10:26.082377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:26.082393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:26.091318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f9b30 00:28:34.715 [2024-11-06 11:10:26.092134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:26.092150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:26.104991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e23b8 00:28:34.715 [2024-11-06 11:10:26.106483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:26.106498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:26.117795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e7c50 00:28:34.715 [2024-11-06 11:10:26.119310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:26.119329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.715 [2024-11-06 11:10:26.127427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166dece0 00:28:34.715 [2024-11-06 11:10:26.128287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.715 [2024-11-06 11:10:26.128303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.141757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166de8a8 00:28:34.977 [2024-11-06 11:10:26.143271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.977 [2024-11-06 11:10:26.143287] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.151402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f9b30 00:28:34.977 [2024-11-06 11:10:26.152227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.977 [2024-11-06 11:10:26.152243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.165690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166de038 00:28:34.977 [2024-11-06 11:10:26.167207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.977 [2024-11-06 11:10:26.167223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.175349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166ddc00 00:28:34.977 [2024-11-06 11:10:26.176207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.977 [2024-11-06 11:10:26.176223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.189679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e0ea0 00:28:34.977 [2024-11-06 11:10:26.191201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.977 [2024-11-06 11:10:26.191217] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.199307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166dece0 00:28:34.977 [2024-11-06 11:10:26.200174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.977 [2024-11-06 11:10:26.200191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.212863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166de8a8 00:28:34.977 [2024-11-06 11:10:26.214362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.977 [2024-11-06 11:10:26.214378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.225604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166ddc00 00:28:34.977 [2024-11-06 11:10:26.227098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.977 [2024-11-06 11:10:26.227114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.235253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f9b30 00:28:34.977 [2024-11-06 11:10:26.236135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:34.977 [2024-11-06 11:10:26.236152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.249602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e0ea0 00:28:34.977 [2024-11-06 11:10:26.251127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.977 [2024-11-06 11:10:26.251144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.260805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fac10 00:28:34.977 [2024-11-06 11:10:26.262299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.977 [2024-11-06 11:10:26.262315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.271249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166de8a8 00:28:34.977 [2024-11-06 11:10:26.272108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.977 [2024-11-06 11:10:26.272125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.284847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166de038 00:28:34.977 [2024-11-06 11:10:26.286318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14612 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.977 [2024-11-06 11:10:26.286334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.295290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166de8a8 00:28:34.977 [2024-11-06 11:10:26.296170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.977 [2024-11-06 11:10:26.296186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.311142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166ddc00 00:28:34.977 [2024-11-06 11:10:26.313249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.977 [2024-11-06 11:10:26.313266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.320791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fac10 00:28:34.977 [2024-11-06 11:10:26.322272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.977 [2024-11-06 11:10:26.322288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.333561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166dece0 00:28:34.977 [2024-11-06 11:10:26.335064] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.977 [2024-11-06 11:10:26.335080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.344768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e7c50 00:28:34.977 [2024-11-06 11:10:26.346229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.977 [2024-11-06 11:10:26.346245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.977 [2024-11-06 11:10:26.359084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166de8a8 00:28:34.978 [2024-11-06 11:10:26.361232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.978 [2024-11-06 11:10:26.361248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.978 [2024-11-06 11:10:26.368763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166dece0 00:28:34.978 [2024-11-06 11:10:26.370243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.978 [2024-11-06 11:10:26.370260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.978 [2024-11-06 11:10:26.383106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166de8a8 00:28:34.978 [2024-11-06 11:10:26.385247] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.978 [2024-11-06 11:10:26.385264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.978 [2024-11-06 11:10:26.393522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fac10 00:28:34.978 [2024-11-06 11:10:26.395000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.978 [2024-11-06 11:10:26.395017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.239 [2024-11-06 11:10:26.405470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f1430 00:28:35.239 [2024-11-06 11:10:26.406926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.239 [2024-11-06 11:10:26.406942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.239 [2024-11-06 11:10:26.417444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e1710 00:28:35.239 [2024-11-06 11:10:26.418926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.239 [2024-11-06 11:10:26.418942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.239 [2024-11-06 11:10:26.429440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e88f8 
00:28:35.239 [2024-11-06 11:10:26.430920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.239 [2024-11-06 11:10:26.430940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.239 [2024-11-06 11:10:26.440660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166ddc00 00:28:35.239 [2024-11-06 11:10:26.442146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.239 [2024-11-06 11:10:26.442163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.239 [2024-11-06 11:10:26.453402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e88f8 00:28:35.239 [2024-11-06 11:10:26.454920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.239 [2024-11-06 11:10:26.454937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.239 [2024-11-06 11:10:26.464602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e7c50 00:28:35.239 [2024-11-06 11:10:26.466057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.239 [2024-11-06 11:10:26.466074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:35.239 [2024-11-06 11:10:26.477387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd19520) with pdu=0x2000166de038 00:28:35.239 [2024-11-06 11:10:26.478868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.239 [2024-11-06 11:10:26.478884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.239 21227.00 IOPS, 82.92 MiB/s [2024-11-06T10:10:26.661Z] [2024-11-06 11:10:26.490952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f1ca0 00:28:35.239 [2024-11-06 11:10:26.493093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.239 [2024-11-06 11:10:26.493109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.239 [2024-11-06 11:10:26.501351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166de470 00:28:35.239 [2024-11-06 11:10:26.502837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.239 [2024-11-06 11:10:26.502856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:35.239 [2024-11-06 11:10:26.512526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e7818 00:28:35.240 [2024-11-06 11:10:26.513984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.240 [2024-11-06 11:10:26.514000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:28:35.240 [2024-11-06 11:10:26.526861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166ddc00 00:28:35.240 [2024-11-06 11:10:26.528985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.240 [2024-11-06 11:10:26.529002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:35.240 [2024-11-06 11:10:26.537259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e6fa8 00:28:35.240 [2024-11-06 11:10:26.538754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.240 [2024-11-06 11:10:26.538771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:35.240 [2024-11-06 11:10:26.548432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e1b48 00:28:35.240 [2024-11-06 11:10:26.549896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.240 [2024-11-06 11:10:26.549913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:35.240 [2024-11-06 11:10:26.562777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f1ca0 00:28:35.240 [2024-11-06 11:10:26.564882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.240 [2024-11-06 11:10:26.564899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:35.240 [2024-11-06 11:10:26.573209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e1710 00:28:35.240 [2024-11-06 11:10:26.574676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.240 [2024-11-06 11:10:26.574693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.240 [2024-11-06 11:10:26.586725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f1430 00:28:35.240 [2024-11-06 11:10:26.588841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.240 [2024-11-06 11:10:26.588857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:35.240 [2024-11-06 11:10:26.596388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e8088 00:28:35.240 [2024-11-06 11:10:26.597841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.240 [2024-11-06 11:10:26.597858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:35.240 [2024-11-06 11:10:26.609143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e1710 00:28:35.240 [2024-11-06 11:10:26.610619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.240 [2024-11-06 11:10:26.610635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.240 [2024-11-06 11:10:26.622688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f1ca0 00:28:35.240 [2024-11-06 11:10:26.624806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.240 [2024-11-06 11:10:26.624822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:35.240 [2024-11-06 11:10:26.632352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e8088 00:28:35.240 [2024-11-06 11:10:26.633806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.240 [2024-11-06 11:10:26.633822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:35.240 [2024-11-06 11:10:26.645131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e1710 00:28:35.240 [2024-11-06 11:10:26.646611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.240 [2024-11-06 11:10:26.646628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.240 [2024-11-06 11:10:26.658658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166de8a8 00:28:35.502 [2024-11-06 11:10:26.660764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.502 [2024-11-06 11:10:26.660780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:35.502 [2024-11-06 11:10:26.669041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e27f0 00:28:35.502 [2024-11-06 11:10:26.670504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.502 [2024-11-06 11:10:26.670521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.502 [2024-11-06 11:10:26.682558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e1710 00:28:35.502 [2024-11-06 11:10:26.684633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.502 [2024-11-06 11:10:26.684649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.502 [2024-11-06 11:10:26.692214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e6fa8 00:28:35.502 [2024-11-06 11:10:26.693656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.502 [2024-11-06 11:10:26.693672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:35.502 [2024-11-06 11:10:26.705184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e27f0 00:28:35.502 [2024-11-06 11:10:26.706665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.502 
[2024-11-06 11:10:26.706682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.502 [2024-11-06 11:10:26.716457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e7c50 00:28:35.502 [2024-11-06 11:10:26.717910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.502 [2024-11-06 11:10:26.717926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:35.502 [2024-11-06 11:10:26.730722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166de8a8 00:28:35.502 [2024-11-06 11:10:26.732807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.502 [2024-11-06 11:10:26.732823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.502 [2024-11-06 11:10:26.741189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fac10 00:28:35.502 [2024-11-06 11:10:26.742639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.502 [2024-11-06 11:10:26.742658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:35.502 [2024-11-06 11:10:26.752388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f0ff8 00:28:35.502 [2024-11-06 11:10:26.753811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17456 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.502 [2024-11-06 11:10:26.753828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:35.502 [2024-11-06 11:10:26.765177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166de038 00:28:35.502 [2024-11-06 11:10:26.766632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.502 [2024-11-06 11:10:26.766649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:35.502 [2024-11-06 11:10:26.776379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e8088 00:28:35.502 [2024-11-06 11:10:26.777809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.502 [2024-11-06 11:10:26.777825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:35.502 [2024-11-06 11:10:26.789162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fac10 00:28:35.502 [2024-11-06 11:10:26.790633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.502 [2024-11-06 11:10:26.790650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:35.502 [2024-11-06 11:10:26.801155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e01f8 00:28:35.502 [2024-11-06 11:10:26.802613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:24799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.502 [2024-11-06 11:10:26.802630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:35.502 [2024-11-06 11:10:26.814692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f9f68 00:28:35.502 [2024-11-06 11:10:26.816801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.502 [2024-11-06 11:10:26.816817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.502 [2024-11-06 11:10:26.825155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166de038 00:28:35.502 [2024-11-06 11:10:26.826601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.502 [2024-11-06 11:10:26.826617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:35.502 [2024-11-06 11:10:26.836392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e8088 00:28:35.502 [2024-11-06 11:10:26.837808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.502 [2024-11-06 11:10:26.837824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:35.503 [2024-11-06 11:10:26.850734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e1710 00:28:35.503 [2024-11-06 11:10:26.852810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.503 [2024-11-06 11:10:26.852828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.503 [2024-11-06 11:10:26.860409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f0788 00:28:35.503 [2024-11-06 11:10:26.861845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.503 [2024-11-06 11:10:26.861862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:35.503 [2024-11-06 11:10:26.873207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e1f80 00:28:35.503 [2024-11-06 11:10:26.874671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.503 [2024-11-06 11:10:26.874688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:35.503 [2024-11-06 11:10:26.884412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166de8a8 00:28:35.503 [2024-11-06 11:10:26.885808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.503 [2024-11-06 11:10:26.885824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:35.503 [2024-11-06 11:10:26.897182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e27f0 00:28:35.503 
[2024-11-06 11:10:26.898636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.503 [2024-11-06 11:10:26.898653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:35.503 [2024-11-06 11:10:26.910712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f0ff8 00:28:35.503 [2024-11-06 11:10:26.912808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.503 [2024-11-06 11:10:26.912824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.503 [2024-11-06 11:10:26.921179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e6738 00:28:35.765 [2024-11-06 11:10:26.922627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.765 [2024-11-06 11:10:26.922643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:35.765 [2024-11-06 11:10:26.933182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e1710 00:28:35.765 [2024-11-06 11:10:26.934610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.765 [2024-11-06 11:10:26.934627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.765 [2024-11-06 11:10:26.945167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd19520) with pdu=0x2000166fac10 00:28:35.765 [2024-11-06 11:10:26.946620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.765 [2024-11-06 11:10:26.946636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:35.765 [2024-11-06 11:10:26.958713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f0788 00:28:35.765 [2024-11-06 11:10:26.960805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.765 [2024-11-06 11:10:26.960821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.765 [2024-11-06 11:10:26.969175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e5ec8 00:28:35.765 [2024-11-06 11:10:26.970626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.765 [2024-11-06 11:10:26.970643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:35.765 [2024-11-06 11:10:26.982710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e23b8 00:28:35.765 [2024-11-06 11:10:26.984808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.765 [2024-11-06 11:10:26.984825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.765 [2024-11-06 11:10:26.992376] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166eff18 00:28:35.765 [2024-11-06 11:10:26.993814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.765 [2024-11-06 11:10:26.993830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:35.765 [2024-11-06 11:10:27.006684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f0ff8 00:28:35.765 [2024-11-06 11:10:27.008777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.765 [2024-11-06 11:10:27.008793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:35.765 [2024-11-06 11:10:27.015014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166ec840 00:28:35.765 [2024-11-06 11:10:27.016083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.765 [2024-11-06 11:10:27.016099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:35.765 [2024-11-06 11:10:27.029356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166eb760 00:28:35.765 [2024-11-06 11:10:27.031076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.765 [2024-11-06 11:10:27.031092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:28:35.765 [2024-11-06 11:10:27.039792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f92c0 00:28:35.765 [2024-11-06 11:10:27.040877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.766 [2024-11-06 11:10:27.040894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:35.766 [2024-11-06 11:10:27.051802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166dfdc0 00:28:35.766 [2024-11-06 11:10:27.052878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.766 [2024-11-06 11:10:27.052897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:35.766 [2024-11-06 11:10:27.063033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fd208 00:28:35.766 [2024-11-06 11:10:27.064105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.766 [2024-11-06 11:10:27.064122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:35.766 [2024-11-06 11:10:27.077313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fef90 00:28:35.766 [2024-11-06 11:10:27.079038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.766 [2024-11-06 11:10:27.079054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:35.766 [2024-11-06 11:10:27.087760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f1ca0 00:28:35.766 [2024-11-06 11:10:27.088805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.766 [2024-11-06 11:10:27.088821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:35.766 [2024-11-06 11:10:27.099719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166ebfd0 00:28:35.766 [2024-11-06 11:10:27.100787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.766 [2024-11-06 11:10:27.100803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:35.766 [2024-11-06 11:10:27.111713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166eaef0 00:28:35.766 [2024-11-06 11:10:27.112780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.766 [2024-11-06 11:10:27.112797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:35.766 [2024-11-06 11:10:27.123723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166dfdc0 00:28:35.766 [2024-11-06 11:10:27.124813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.766 [2024-11-06 11:10:27.124829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:35.766 [2024-11-06 11:10:27.135732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166df118 00:28:35.766 [2024-11-06 11:10:27.136808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.766 [2024-11-06 11:10:27.136824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:35.766 [2024-11-06 11:10:27.149258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fc998 00:28:35.766 [2024-11-06 11:10:27.150943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.766 [2024-11-06 11:10:27.150959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:35.766 [2024-11-06 11:10:27.158915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166eaef0 00:28:35.766 [2024-11-06 11:10:27.159985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.766 [2024-11-06 11:10:27.160001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:35.766 [2024-11-06 11:10:27.171675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e9e10 00:28:35.766 [2024-11-06 11:10:27.172748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.766 [2024-11-06 11:10:27.172764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:35.766 [2024-11-06 11:10:27.183677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166df118 00:28:35.766 [2024-11-06 11:10:27.184764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.766 [2024-11-06 11:10:27.184780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:36.027 [2024-11-06 11:10:27.195653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fb8b8 00:28:36.027 [2024-11-06 11:10:27.196716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.027 [2024-11-06 11:10:27.196732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:36.027 [2024-11-06 11:10:27.209175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fc998 00:28:36.027 [2024-11-06 11:10:27.210913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.027 [2024-11-06 11:10:27.210928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:36.027 [2024-11-06 11:10:27.218852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166eaef0 00:28:36.028 [2024-11-06 11:10:27.219931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 
[2024-11-06 11:10:27.219946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.231641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e9e10 00:28:36.028 [2024-11-06 11:10:27.232711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.232727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.242802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f5be8 00:28:36.028 [2024-11-06 11:10:27.243860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.243876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.254750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e9e10 00:28:36.028 [2024-11-06 11:10:27.255792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.255808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.267495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e8d30 00:28:36.028 [2024-11-06 11:10:27.268545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5109 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.268562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.279503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa7d8 00:28:36.028 [2024-11-06 11:10:27.280569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.280584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.291508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa3a0 00:28:36.028 [2024-11-06 11:10:27.292555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.292571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.303494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e0ea0 00:28:36.028 [2024-11-06 11:10:27.304543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.304560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.315522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166f6458 00:28:36.028 [2024-11-06 11:10:27.316587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:97 nsid:1 lba:15907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.316604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.327472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e9e10 00:28:36.028 [2024-11-06 11:10:27.328520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.328536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.339468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa7d8 00:28:36.028 [2024-11-06 11:10:27.340540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.340556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.353038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e0ea0 00:28:36.028 [2024-11-06 11:10:27.354738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.354756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.362698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e9e10 00:28:36.028 [2024-11-06 11:10:27.363749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.363768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.375430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa7d8 00:28:36.028 [2024-11-06 11:10:27.376484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.376500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.387416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa7d8 00:28:36.028 [2024-11-06 11:10:27.388466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.388482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.399357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa7d8 00:28:36.028 [2024-11-06 11:10:27.400406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.400422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.411335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa7d8 00:28:36.028 
[2024-11-06 11:10:27.412388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.412404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.423314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa7d8 00:28:36.028 [2024-11-06 11:10:27.424361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.424377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.435251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa7d8 00:28:36.028 [2024-11-06 11:10:27.436306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.028 [2024-11-06 11:10:27.436322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:36.028 [2024-11-06 11:10:27.447208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166e9e10 00:28:36.289 [2024-11-06 11:10:27.448265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.289 [2024-11-06 11:10:27.448282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:36.289 [2024-11-06 11:10:27.460728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd19520) with pdu=0x2000166f6458 00:28:36.289 [2024-11-06 11:10:27.462421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.289 [2024-11-06 11:10:27.462436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:36.289 [2024-11-06 11:10:27.471189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa3a0 00:28:36.289 [2024-11-06 11:10:27.472241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.289 [2024-11-06 11:10:27.472261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:36.289 [2024-11-06 11:10:27.483176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19520) with pdu=0x2000166fa7d8 00:28:36.289 [2024-11-06 11:10:27.484947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.289 [2024-11-06 11:10:27.484964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:36.289 21270.50 IOPS, 83.09 MiB/s 00:28:36.289 Latency(us) 00:28:36.289 [2024-11-06T10:10:27.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.289 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:36.289 nvme0n1 : 2.01 21293.42 83.18 0.00 0.00 6003.66 2061.65 14745.60 00:28:36.289 [2024-11-06T10:10:27.711Z] =================================================================================================================== 00:28:36.289 [2024-11-06T10:10:27.711Z] Total : 21293.42 83.18 0.00 0.00 
6003.66 2061.65 14745.60 00:28:36.289 { 00:28:36.289 "results": [ 00:28:36.289 { 00:28:36.289 "job": "nvme0n1", 00:28:36.289 "core_mask": "0x2", 00:28:36.289 "workload": "randwrite", 00:28:36.289 "status": "finished", 00:28:36.289 "queue_depth": 128, 00:28:36.289 "io_size": 4096, 00:28:36.289 "runtime": 2.00677, 00:28:36.289 "iops": 21293.421767317628, 00:28:36.289 "mibps": 83.17742877858448, 00:28:36.289 "io_failed": 0, 00:28:36.289 "io_timeout": 0, 00:28:36.289 "avg_latency_us": 6003.658967962369, 00:28:36.289 "min_latency_us": 2061.653333333333, 00:28:36.289 "max_latency_us": 14745.6 00:28:36.290 } 00:28:36.290 ], 00:28:36.290 "core_count": 1 00:28:36.290 } 00:28:36.290 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:36.290 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:36.290 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:36.290 | .driver_specific 00:28:36.290 | .nvme_error 00:28:36.290 | .status_code 00:28:36.290 | .command_transient_transport_error' 00:28:36.290 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:36.290 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 167 > 0 )) 00:28:36.290 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3430922 00:28:36.290 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3430922 ']' 00:28:36.290 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3430922 00:28:36.290 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 
00:28:36.290 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:36.290 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3430922 00:28:36.550 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:36.550 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:36.550 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3430922' 00:28:36.550 killing process with pid 3430922 00:28:36.550 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3430922 00:28:36.550 Received shutdown signal, test time was about 2.000000 seconds 00:28:36.550 00:28:36.550 Latency(us) 00:28:36.550 [2024-11-06T10:10:27.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.550 [2024-11-06T10:10:27.972Z] =================================================================================================================== 00:28:36.550 [2024-11-06T10:10:27.972Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:36.550 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3430922 00:28:36.550 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:36.551 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:36.551 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:36.551 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:36.551 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # 
qd=16 00:28:36.551 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3431601 00:28:36.551 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3431601 /var/tmp/bperf.sock 00:28:36.551 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3431601 ']' 00:28:36.551 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:36.551 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:36.551 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:36.551 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:36.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:36.551 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:36.551 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:36.551 [2024-11-06 11:10:27.907582] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:28:36.551 [2024-11-06 11:10:27.907638] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3431601 ] 00:28:36.551 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:36.551 Zero copy mechanism will not be used. 
00:28:36.812 [2024-11-06 11:10:27.990590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.812 [2024-11-06 11:10:28.018501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.385 11:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:37.385 11:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:37.385 11:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:37.385 11:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:37.645 11:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:37.645 11:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.645 11:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.645 11:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.645 11:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.645 11:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.906 nvme0n1 00:28:37.906 11:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:37.906 11:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.906 11:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.906 11:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.906 11:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:37.906 11:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:38.167 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:38.167 Zero copy mechanism will not be used. 00:28:38.167 Running I/O for 2 seconds... 00:28:38.167 [2024-11-06 11:10:29.375154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.375370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.375398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.379452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.379653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.379674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.167 
[2024-11-06 11:10:29.386375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.386691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.386711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.394610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.395009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.395028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.403760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.404049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.404067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.412019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.412299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.412317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.419948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.420263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.420282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.428446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.428635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.428653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.434844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.435170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.435187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.441761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.441953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.441969] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.450254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.450516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.450534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.457507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.457832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.457850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.465236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.465427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.465444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.472027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.472319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 
11:10:29.472338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.479784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.479994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.480014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.488971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.489294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.489311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.495060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.495249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.495265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.500753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.500934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.500951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.508387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.508566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.508582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.513763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.513943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.513960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.521501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.521770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.521788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.529453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.529764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.529782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.536656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.536873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.536890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.545040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.545281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.545307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.554733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.555076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.167 [2024-11-06 11:10:29.555094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.167 [2024-11-06 11:10:29.563761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.167 [2024-11-06 11:10:29.564100] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.167 [2024-11-06 11:10:29.564118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.167 [2024-11-06 11:10:29.571360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.167 [2024-11-06 11:10:29.571545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.167 [2024-11-06 11:10:29.571565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.167 [2024-11-06 11:10:29.577999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.167 [2024-11-06 11:10:29.578181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.167 [2024-11-06 11:10:29.578198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.167 [2024-11-06 11:10:29.586488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.167 [2024-11-06 11:10:29.586736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.167 [2024-11-06 11:10:29.586759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.594640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.594856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.594872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.604702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.605002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.605020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.615162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.615410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.615426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.624038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.624332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.624350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.632470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.632873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.632891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.641217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.641512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.641530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.649572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.649796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.649813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.657443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.657708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.657727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.665214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.665492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.665509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.674206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.674389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.674405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.681539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.681739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.681760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.689687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.689998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.690019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.698361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.698597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.698614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.705995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.706223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.706240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.714588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.714891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.714910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.721593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.721875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.721893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.728244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.728422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.728439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.735697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.735999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.736017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.742571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.430 [2024-11-06 11:10:29.742876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.430 [2024-11-06 11:10:29.742895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.430 [2024-11-06 11:10:29.752356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.431 [2024-11-06 11:10:29.752633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.431 [2024-11-06 11:10:29.752652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.431 [2024-11-06 11:10:29.758630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.431 [2024-11-06 11:10:29.758933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.431 [2024-11-06 11:10:29.758951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.431 [2024-11-06 11:10:29.765390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.431 [2024-11-06 11:10:29.765572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.431 [2024-11-06 11:10:29.765589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.431 [2024-11-06 11:10:29.771795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.431 [2024-11-06 11:10:29.772055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.431 [2024-11-06 11:10:29.772073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.431 [2024-11-06 11:10:29.778065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.431 [2024-11-06 11:10:29.778243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.431 [2024-11-06 11:10:29.778259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.431 [2024-11-06 11:10:29.785645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.431 [2024-11-06 11:10:29.785896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.431 [2024-11-06 11:10:29.785913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.431 [2024-11-06 11:10:29.791625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.431 [2024-11-06 11:10:29.791845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.431 [2024-11-06 11:10:29.791862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.431 [2024-11-06 11:10:29.800300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.431 [2024-11-06 11:10:29.800609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.431 [2024-11-06 11:10:29.800626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.431 [2024-11-06 11:10:29.806426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.431 [2024-11-06 11:10:29.806604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.431 [2024-11-06 11:10:29.806621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.431 [2024-11-06 11:10:29.814837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.431 [2024-11-06 11:10:29.815192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.431 [2024-11-06 11:10:29.815209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.431 [2024-11-06 11:10:29.821217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.431 [2024-11-06 11:10:29.821540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.431 [2024-11-06 11:10:29.821558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.431 [2024-11-06 11:10:29.828981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.431 [2024-11-06 11:10:29.829285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.431 [2024-11-06 11:10:29.829303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.431 [2024-11-06 11:10:29.836249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.431 [2024-11-06 11:10:29.836440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.431 [2024-11-06 11:10:29.836457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.431 [2024-11-06 11:10:29.843369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.431 [2024-11-06 11:10:29.843557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.431 [2024-11-06 11:10:29.843574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.695 [2024-11-06 11:10:29.849195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.849520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.849537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.854445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.854624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.854641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.859679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.859862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.859880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.867501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.867859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.867877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.872868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.873067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.873087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.882496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.882782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.882800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.889466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.889675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.889692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.897098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.897308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.897324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.905292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.905477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.905493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.914716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.914986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.915003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.919489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.919662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.919679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.923127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.923300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.923317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.931282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.931536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.931554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.935764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.935942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.935958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.943379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.943687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.943705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.947959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.948133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.948149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.955190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.955369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.955386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.962130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.962392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.962410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.970925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.971147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.971164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.976582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.976761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.976778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.982948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.983193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.983209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:29.992847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:29.993150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:29.993168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:30.003375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:30.003619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:30.003646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:30.014574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:30.014779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:30.014801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:30.020092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:30.020267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:30.020284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.696 [2024-11-06 11:10:30.028155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.696 [2024-11-06 11:10:30.028334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.696 [2024-11-06 11:10:30.028352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.697 [2024-11-06 11:10:30.036211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.697 [2024-11-06 11:10:30.036389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.697 [2024-11-06 11:10:30.036406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.697 [2024-11-06 11:10:30.043737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.697 [2024-11-06 11:10:30.044050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.697 [2024-11-06 11:10:30.044069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.697 [2024-11-06 11:10:30.052737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.697 [2024-11-06 11:10:30.053091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.697 [2024-11-06 11:10:30.053110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.697 [2024-11-06 11:10:30.058201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.697 [2024-11-06 11:10:30.058577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.697 [2024-11-06 11:10:30.058596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.697 [2024-11-06 11:10:30.064714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.697 [2024-11-06 11:10:30.064903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.697 [2024-11-06 11:10:30.064928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.697 [2024-11-06 11:10:30.072252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.697 [2024-11-06 11:10:30.072429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.697 [2024-11-06 11:10:30.072448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.697 [2024-11-06 11:10:30.076666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.697 [2024-11-06 11:10:30.076967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.697 [2024-11-06 11:10:30.076986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.697 [2024-11-06 11:10:30.081805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.697 [2024-11-06 11:10:30.081980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.697 [2024-11-06 11:10:30.081997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.697 [2024-11-06 11:10:30.089527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.697 [2024-11-06 11:10:30.089823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.697 [2024-11-06 11:10:30.089841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.697 [2024-11-06 11:10:30.096327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.697 [2024-11-06 11:10:30.096561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.697 [2024-11-06 11:10:30.096578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.697 [2024-11-06 11:10:30.104896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.697 [2024-11-06 11:10:30.105192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.697 [2024-11-06 11:10:30.105211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.697 [2024-11-06 11:10:30.109291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.697 [2024-11-06 11:10:30.109465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.697 [2024-11-06 11:10:30.109482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.697 [2024-11-06 11:10:30.113114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.697 [2024-11-06 11:10:30.113288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.697 [2024-11-06 11:10:30.113305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.958 [2024-11-06 11:10:30.118046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.958 [2024-11-06 11:10:30.118257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.958 [2024-11-06 11:10:30.118274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.958 [2024-11-06 11:10:30.126432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:38.958 [2024-11-06 11:10:30.126735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.958 [2024-11-06 11:10:30.126757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:38.958 [2024-11-06 11:10:30.136185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest
error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.958 [2024-11-06 11:10:30.136360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.958 [2024-11-06 11:10:30.136377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.958 [2024-11-06 11:10:30.144742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.958 [2024-11-06 11:10:30.145031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.958 [2024-11-06 11:10:30.145049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.958 [2024-11-06 11:10:30.154252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.958 [2024-11-06 11:10:30.154499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.958 [2024-11-06 11:10:30.154516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.958 [2024-11-06 11:10:30.162392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.958 [2024-11-06 11:10:30.162718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.958 [2024-11-06 11:10:30.162735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.958 [2024-11-06 11:10:30.167856] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.167974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.167991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.175827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.176171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.176187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.182323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.182476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.182493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.189882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.190123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.190140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.196878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.197020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.197037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.204907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.205034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.205050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.211101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.211213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.211230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.214796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.214905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.214921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.220889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.220985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.221000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.225163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.225291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.225307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.229052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.229142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.229158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.232651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.232801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.232820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.236419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.236533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.236549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.240242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.240367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.240383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.244137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.244282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.244298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.247947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.248097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:38.959 [2024-11-06 11:10:30.248114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.251756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.251867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.251883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.255500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.255626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.255642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.259197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.259342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.259359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.262979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.263134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.263151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.266801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.266947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.266963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.270552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.270693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.270710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.274415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.274531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.274547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.278019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.278123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.278139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.283010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.283305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.283322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.293250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.293502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.293519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.303365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.959 [2024-11-06 11:10:30.303546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-06 11:10:30.303562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.959 [2024-11-06 11:10:30.313697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 
00:28:38.959 [2024-11-06 11:10:30.314022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.960 [2024-11-06 11:10:30.314039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.960 [2024-11-06 11:10:30.324793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.960 [2024-11-06 11:10:30.325032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.960 [2024-11-06 11:10:30.325051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.960 [2024-11-06 11:10:30.335777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.960 [2024-11-06 11:10:30.335888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.960 [2024-11-06 11:10:30.335904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.960 [2024-11-06 11:10:30.346377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.960 [2024-11-06 11:10:30.346666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.960 [2024-11-06 11:10:30.346684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.960 [2024-11-06 11:10:30.357268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.960 [2024-11-06 11:10:30.357530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.960 [2024-11-06 11:10:30.357548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.960 [2024-11-06 11:10:30.367709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:38.960 [2024-11-06 11:10:30.367929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.960 [2024-11-06 11:10:30.367945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.222 4340.00 IOPS, 542.50 MiB/s [2024-11-06T10:10:30.644Z] [2024-11-06 11:10:30.379031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.222 [2024-11-06 11:10:30.379288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-11-06 11:10:30.379304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.222 [2024-11-06 11:10:30.389516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.222 [2024-11-06 11:10:30.389662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-11-06 11:10:30.389678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:28:39.222 [2024-11-06 11:10:30.400554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.222 [2024-11-06 11:10:30.400852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-11-06 11:10:30.400870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.222 [2024-11-06 11:10:30.407274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.222 [2024-11-06 11:10:30.407404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-11-06 11:10:30.407420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.222 [2024-11-06 11:10:30.412019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.222 [2024-11-06 11:10:30.412112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-11-06 11:10:30.412128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.222 [2024-11-06 11:10:30.417404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.222 [2024-11-06 11:10:30.417496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-11-06 11:10:30.417512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.222 [2024-11-06 11:10:30.421365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.222 [2024-11-06 11:10:30.421455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-11-06 11:10:30.421471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.222 [2024-11-06 11:10:30.424911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.222 [2024-11-06 11:10:30.425004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-11-06 11:10:30.425019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.222 [2024-11-06 11:10:30.428441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.222 [2024-11-06 11:10:30.428531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-11-06 11:10:30.428547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.222 [2024-11-06 11:10:30.431955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.222 [2024-11-06 11:10:30.432043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-11-06 11:10:30.432058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.222 [2024-11-06 11:10:30.435437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.222 [2024-11-06 11:10:30.435528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-11-06 11:10:30.435544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.222 [2024-11-06 11:10:30.438888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.222 [2024-11-06 11:10:30.438978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.222 [2024-11-06 11:10:30.438994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.222 [2024-11-06 11:10:30.444879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.222 [2024-11-06 11:10:30.444985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.223 [2024-11-06 11:10:30.445001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.223 [2024-11-06 11:10:30.450927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.223 [2024-11-06 11:10:30.451000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:39.223 [2024-11-06 11:10:30.451016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.223 [2024-11-06 11:10:30.456310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.223 [2024-11-06 11:10:30.456381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.223 [2024-11-06 11:10:30.456397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.223 [2024-11-06 11:10:30.460182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.223 [2024-11-06 11:10:30.460248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.223 [2024-11-06 11:10:30.460264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.223 [2024-11-06 11:10:30.464129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.223 [2024-11-06 11:10:30.464194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.223 [2024-11-06 11:10:30.464210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.223 [2024-11-06 11:10:30.468819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.223 [2024-11-06 11:10:30.468884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.468900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.472307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.472372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.472389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.476428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.476493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.476509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.479980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.480045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.480061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.483778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.483845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.483864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.490557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.490816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.490832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.498145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.498420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.498437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.505926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.506113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.506129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.514203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.514269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.514285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.519885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.520068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.520083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.527464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.527531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.527546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.532299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.532584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.532602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.538341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.538430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.538445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.544470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.544778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.544795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.554282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.554438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.554454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.564343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.564583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.564599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.574683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.574933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.574951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.583980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.584224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.584240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.593623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.593934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.593952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.603578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.603690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.603706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.607695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.607779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.607795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.613099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.613331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.613348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.223 [2024-11-06 11:10:30.621400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.223 [2024-11-06 11:10:30.621642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.223 [2024-11-06 11:10:30.621660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.224 [2024-11-06 11:10:30.629705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.224 [2024-11-06 11:10:30.629782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.224 [2024-11-06 11:10:30.629798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.224 [2024-11-06 11:10:30.636166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.224 [2024-11-06 11:10:30.636417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.224 [2024-11-06 11:10:30.636434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.224 [2024-11-06 11:10:30.640871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.224 [2024-11-06 11:10:30.640983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.224 [2024-11-06 11:10:30.640999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.486 [2024-11-06 11:10:30.646786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.486 [2024-11-06 11:10:30.647009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.486 [2024-11-06 11:10:30.647026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.486 [2024-11-06 11:10:30.656594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.486 [2024-11-06 11:10:30.656673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.486 [2024-11-06 11:10:30.656689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.486 [2024-11-06 11:10:30.666402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.486 [2024-11-06 11:10:30.666597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.666613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.676785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.677005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.677021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.687035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.687285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.687304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.697475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.697582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.697598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.702408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.702475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.702491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.706101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.706169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.706184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.711610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.711677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.711693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.719160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.719226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.719241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.725495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.725562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.725578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.730662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.730727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.730743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.737694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.737968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.737985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.744452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.744523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.744539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.751962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.752221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.752238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.760720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.760819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.760836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.768828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.769096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.769113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.777612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.777880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.777897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.784546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.784625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.784641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.792810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.792885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.792901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.799982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.800048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.800064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.804889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.805122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.805138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.810349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.810418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.810434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.817636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.817702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.817718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.824317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.824397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.824413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.829961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.830040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.830055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.836962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.837025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.837041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.840852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.487 [2024-11-06 11:10:30.840920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.487 [2024-11-06 11:10:30.840936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.487 [2024-11-06 11:10:30.845621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.488 [2024-11-06 11:10:30.845715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.488 [2024-11-06 11:10:30.845731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.488 [2024-11-06 11:10:30.849458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.488 [2024-11-06 11:10:30.849527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.488 [2024-11-06 11:10:30.849543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.488 [2024-11-06 11:10:30.852977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.488 [2024-11-06 11:10:30.853045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.488 [2024-11-06 11:10:30.853064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.488 [2024-11-06 11:10:30.856487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.488 [2024-11-06 11:10:30.856557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.488 [2024-11-06 11:10:30.856573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.488 [2024-11-06 11:10:30.859939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.488 [2024-11-06 11:10:30.860021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.488 [2024-11-06 11:10:30.860036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.488 [2024-11-06 11:10:30.863392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.488 [2024-11-06 11:10:30.863468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.488 [2024-11-06 11:10:30.863483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.488 [2024-11-06 11:10:30.866853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.488 [2024-11-06 11:10:30.866951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.488 [2024-11-06 11:10:30.866966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.488 [2024-11-06 11:10:30.870708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.488 [2024-11-06 11:10:30.870887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.488 [2024-11-06 11:10:30.870903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.488 [2024-11-06 11:10:30.880195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.488 [2024-11-06 11:10:30.880475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.488 [2024-11-06 11:10:30.880493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.488 [2024-11-06 11:10:30.890305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.488 [2024-11-06 11:10:30.890572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.488 [2024-11-06 11:10:30.890590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.488 [2024-11-06 11:10:30.900917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.488 [2024-11-06 11:10:30.901086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.488 [2024-11-06 11:10:30.901102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.750 [2024-11-06 11:10:30.911705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.750 [2024-11-06 11:10:30.911997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.750 [2024-11-06 11:10:30.912014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.750 [2024-11-06 11:10:30.922417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.750 [2024-11-06 11:10:30.922726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.750 [2024-11-06 11:10:30.922743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.750 [2024-11-06 11:10:30.932241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.750 [2024-11-06 11:10:30.932514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.750 [2024-11-06 11:10:30.932531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.750 [2024-11-06 11:10:30.942531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.750 [2024-11-06 11:10:30.942784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.750 [2024-11-06 11:10:30.942800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.750 [2024-11-06 11:10:30.953563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.750 [2024-11-06 11:10:30.953781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.750 [2024-11-06 11:10:30.953797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.750 [2024-11-06 11:10:30.964607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.750 [2024-11-06 11:10:30.964889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.750 [2024-11-06 11:10:30.964906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.750 [2024-11-06 11:10:30.973718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.750 [2024-11-06 11:10:30.973845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.750 [2024-11-06 11:10:30.973861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.751 [2024-11-06 11:10:30.977614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.751 [2024-11-06 11:10:30.977699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.751 [2024-11-06 11:10:30.977715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.751 [2024-11-06 11:10:30.983656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.751 [2024-11-06 11:10:30.983764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.751 [2024-11-06 11:10:30.983780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.751 [2024-11-06 11:10:30.987311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.751 [2024-11-06 11:10:30.987392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.751 [2024-11-06 11:10:30.987408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.751 [2024-11-06 11:10:30.990845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.751 [2024-11-06 11:10:30.990910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.751 [2024-11-06 11:10:30.990925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.751 [2024-11-06 11:10:30.994544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.751 [2024-11-06 11:10:30.994619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.751 [2024-11-06 11:10:30.994635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.751 [2024-11-06 11:10:30.998138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.751 [2024-11-06 11:10:30.998207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.751 [2024-11-06 11:10:30.998223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.751 [2024-11-06 11:10:31.001703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.751 [2024-11-06 11:10:31.001786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.751 [2024-11-06 11:10:31.001801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.751 [2024-11-06 11:10:31.005198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.751 [2024-11-06 11:10:31.005274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.751 [2024-11-06 11:10:31.005289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:39.751 [2024-11-06 11:10:31.008658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.751 [2024-11-06 11:10:31.008738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.751 [2024-11-06 11:10:31.008758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:39.751 [2024-11-06 11:10:31.012134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90
00:28:39.751 [2024-11-06 11:10:31.012218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.751 [2024-11-06 11:10:31.012233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:39.751 [2024-11-06 11:10:31.015574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest
error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.751 [2024-11-06 11:10:31.015653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.751 [2024-11-06 11:10:31.015671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.751 [2024-11-06 11:10:31.019187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.751 [2024-11-06 11:10:31.019325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.751 [2024-11-06 11:10:31.019340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.751 [2024-11-06 11:10:31.025171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.751 [2024-11-06 11:10:31.025461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.751 [2024-11-06 11:10:31.025477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.751 [2024-11-06 11:10:31.034797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.751 [2024-11-06 11:10:31.035067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.751 [2024-11-06 11:10:31.035083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.751 [2024-11-06 11:10:31.044443] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.751 [2024-11-06 11:10:31.044707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.751 [2024-11-06 11:10:31.044723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.751 [2024-11-06 11:10:31.054960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.751 [2024-11-06 11:10:31.055189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.751 [2024-11-06 11:10:31.055205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.751 [2024-11-06 11:10:31.065612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.751 [2024-11-06 11:10:31.065892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.751 [2024-11-06 11:10:31.065909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.751 [2024-11-06 11:10:31.075034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.751 [2024-11-06 11:10:31.075390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.751 [2024-11-06 11:10:31.075407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:28:39.751 [2024-11-06 11:10:31.084971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.751 [2024-11-06 11:10:31.085141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.751 [2024-11-06 11:10:31.085157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.751 [2024-11-06 11:10:31.095074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.751 [2024-11-06 11:10:31.095349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.751 [2024-11-06 11:10:31.095367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.751 [2024-11-06 11:10:31.104910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.751 [2024-11-06 11:10:31.105188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.751 [2024-11-06 11:10:31.105204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.751 [2024-11-06 11:10:31.115402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.751 [2024-11-06 11:10:31.115643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.751 [2024-11-06 11:10:31.115659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.751 [2024-11-06 11:10:31.126072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.751 [2024-11-06 11:10:31.126372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.751 [2024-11-06 11:10:31.126389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.751 [2024-11-06 11:10:31.136103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.751 [2024-11-06 11:10:31.136400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.751 [2024-11-06 11:10:31.136417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.751 [2024-11-06 11:10:31.145907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.751 [2024-11-06 11:10:31.146152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.751 [2024-11-06 11:10:31.146168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.751 [2024-11-06 11:10:31.154676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.751 [2024-11-06 11:10:31.154778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.751 [2024-11-06 11:10:31.154794] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.751 [2024-11-06 11:10:31.159324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.751 [2024-11-06 11:10:31.159404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.751 [2024-11-06 11:10:31.159419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.751 [2024-11-06 11:10:31.163003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.752 [2024-11-06 11:10:31.163082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.752 [2024-11-06 11:10:31.163100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.752 [2024-11-06 11:10:31.166640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:39.752 [2024-11-06 11:10:31.166734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.752 [2024-11-06 11:10:31.166755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.170408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.170496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:40.014 [2024-11-06 11:10:31.170512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.174349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.174430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.174446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.177953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.178036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.178051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.181442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.181536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.181550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.184870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.184948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.184964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.188363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.188443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.188459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.191847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.191923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.191939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.198851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.199102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.199118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.203681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.203803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.203818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.207755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.208047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.208065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.217280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.217409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.217426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.223683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.223938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.223955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.232482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 
00:28:40.014 [2024-11-06 11:10:31.232558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.232574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.237915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.238016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.238031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.244179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.244264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.244280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.248404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.248503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.248519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.254800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.254903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.254918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.260454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.260530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.260545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.265237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.265314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.265329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.270125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.270200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.270215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 
11:10:31.275487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.275560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.275576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.282313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.282389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.282405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.289234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.289308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.289324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.294231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.294472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.294488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.301626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.301836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.014 [2024-11-06 11:10:31.301854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.014 [2024-11-06 11:10:31.307266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.014 [2024-11-06 11:10:31.307343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.015 [2024-11-06 11:10:31.307358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.015 [2024-11-06 11:10:31.313325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.015 [2024-11-06 11:10:31.313421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.015 [2024-11-06 11:10:31.313437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.015 [2024-11-06 11:10:31.319841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.015 [2024-11-06 11:10:31.319918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.015 [2024-11-06 11:10:31.319934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.015 [2024-11-06 11:10:31.328507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.015 [2024-11-06 11:10:31.328756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.015 [2024-11-06 11:10:31.328773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.015 [2024-11-06 11:10:31.339251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.015 [2024-11-06 11:10:31.339548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.015 [2024-11-06 11:10:31.339565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.015 [2024-11-06 11:10:31.349278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.015 [2024-11-06 11:10:31.349554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.015 [2024-11-06 11:10:31.349572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.015 [2024-11-06 11:10:31.359823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.015 [2024-11-06 11:10:31.360156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.015 [2024-11-06 11:10:31.360173] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.015 [2024-11-06 11:10:31.370576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd19860) with pdu=0x2000166fef90 00:28:40.015 [2024-11-06 11:10:31.371861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.015 [2024-11-06 11:10:31.371879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.015 4510.00 IOPS, 563.75 MiB/s 00:28:40.015 Latency(us) 00:28:40.015 [2024-11-06T10:10:31.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.015 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:40.015 nvme0n1 : 2.01 4505.63 563.20 0.00 0.00 3544.48 1617.92 12069.55 00:28:40.015 [2024-11-06T10:10:31.437Z] =================================================================================================================== 00:28:40.015 [2024-11-06T10:10:31.437Z] Total : 4505.63 563.20 0.00 0.00 3544.48 1617.92 12069.55 00:28:40.015 { 00:28:40.015 "results": [ 00:28:40.015 { 00:28:40.015 "job": "nvme0n1", 00:28:40.015 "core_mask": "0x2", 00:28:40.015 "workload": "randwrite", 00:28:40.015 "status": "finished", 00:28:40.015 "queue_depth": 16, 00:28:40.015 "io_size": 131072, 00:28:40.015 "runtime": 2.006156, 00:28:40.015 "iops": 4505.631665732874, 00:28:40.015 "mibps": 563.2039582166093, 00:28:40.015 "io_failed": 0, 00:28:40.015 "io_timeout": 0, 00:28:40.015 "avg_latency_us": 3544.4772327322344, 00:28:40.015 "min_latency_us": 1617.92, 00:28:40.015 "max_latency_us": 12069.546666666667 00:28:40.015 } 00:28:40.015 ], 00:28:40.015 "core_count": 1 00:28:40.015 } 00:28:40.015 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
get_transient_errcount nvme0n1 00:28:40.015 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:40.015 | .driver_specific 00:28:40.015 | .nvme_error 00:28:40.015 | .status_code 00:28:40.015 | .command_transient_transport_error' 00:28:40.015 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:40.015 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:40.277 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 291 > 0 )) 00:28:40.277 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3431601 00:28:40.277 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3431601 ']' 00:28:40.277 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3431601 00:28:40.277 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:40.277 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:40.277 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3431601 00:28:40.277 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:40.277 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:40.277 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3431601' 00:28:40.277 killing process with pid 3431601 00:28:40.277 11:10:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3431601 00:28:40.277 Received shutdown signal, test time was about 2.000000 seconds 00:28:40.277 00:28:40.277 Latency(us) 00:28:40.277 [2024-11-06T10:10:31.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.277 [2024-11-06T10:10:31.699Z] =================================================================================================================== 00:28:40.277 [2024-11-06T10:10:31.699Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:40.277 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3431601 00:28:40.539 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3429201 00:28:40.539 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3429201 ']' 00:28:40.539 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3429201 00:28:40.539 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:40.539 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:40.539 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3429201 00:28:40.539 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:40.539 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:40.539 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3429201' 00:28:40.539 killing process with pid 3429201 00:28:40.539 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@971 -- # kill 3429201 00:28:40.539 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3429201 00:28:40.539 00:28:40.539 real 0m16.292s 00:28:40.539 user 0m32.161s 00:28:40.539 sys 0m3.594s 00:28:40.539 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:40.539 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.539 ************************************ 00:28:40.539 END TEST nvmf_digest_error 00:28:40.539 ************************************ 00:28:40.801 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:40.801 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:40.801 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:40.801 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:40.801 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:40.801 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:40.801 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:40.801 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:40.801 rmmod nvme_tcp 00:28:40.801 rmmod nvme_fabrics 00:28:40.801 rmmod nvme_keyring 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3429201 ']' 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3429201 00:28:40.801 
11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 3429201 ']' 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 3429201 00:28:40.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3429201) - No such process 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 3429201 is not found' 00:28:40.801 Process with pid 3429201 is not found 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.801 11:10:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.716 11:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:42.716 00:28:42.716 real 0m42.252s 00:28:42.716 user 1m6.851s 00:28:42.716 sys 0m12.514s 00:28:42.716 
11:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:42.716 11:10:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:42.716 ************************************ 00:28:42.716 END TEST nvmf_digest 00:28:42.716 ************************************ 00:28:42.977 11:10:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:42.977 11:10:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:42.977 11:10:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:42.977 11:10:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:42.977 11:10:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:42.977 11:10:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:42.977 11:10:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.977 ************************************ 00:28:42.977 START TEST nvmf_bdevperf 00:28:42.977 ************************************ 00:28:42.977 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:42.977 * Looking for test storage... 
00:28:42.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:42.977 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:42.977 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:28:42.977 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:42.977 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:42.977 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:42.977 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:42.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.978 --rc genhtml_branch_coverage=1 00:28:42.978 --rc genhtml_function_coverage=1 00:28:42.978 --rc genhtml_legend=1 00:28:42.978 --rc geninfo_all_blocks=1 00:28:42.978 --rc geninfo_unexecuted_blocks=1 00:28:42.978 00:28:42.978 ' 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:28:42.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.978 --rc genhtml_branch_coverage=1 00:28:42.978 --rc genhtml_function_coverage=1 00:28:42.978 --rc genhtml_legend=1 00:28:42.978 --rc geninfo_all_blocks=1 00:28:42.978 --rc geninfo_unexecuted_blocks=1 00:28:42.978 00:28:42.978 ' 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:42.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.978 --rc genhtml_branch_coverage=1 00:28:42.978 --rc genhtml_function_coverage=1 00:28:42.978 --rc genhtml_legend=1 00:28:42.978 --rc geninfo_all_blocks=1 00:28:42.978 --rc geninfo_unexecuted_blocks=1 00:28:42.978 00:28:42.978 ' 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:42.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.978 --rc genhtml_branch_coverage=1 00:28:42.978 --rc genhtml_function_coverage=1 00:28:42.978 --rc genhtml_legend=1 00:28:42.978 --rc geninfo_all_blocks=1 00:28:42.978 --rc geninfo_unexecuted_blocks=1 00:28:42.978 00:28:42.978 ' 00:28:42.978 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:43.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:43.240 11:10:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:51.382 11:10:41 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:51.382 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.382 
11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:51.382 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:51.382 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:51.382 11:10:41 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:51.382 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:51.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:51.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:28:51.382 00:28:51.382 --- 10.0.0.2 ping statistics --- 00:28:51.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.382 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:51.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:28:51.382 00:28:51.382 --- 10.0.0.1 ping statistics --- 00:28:51.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.382 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:51.382 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3436622 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3436622 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3436622 ']' 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
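The target/initiator split set up above (cvl_0_0 moved into the cvl_0_0_ns_spdk namespace at 10.0.0.2, cvl_0_1 left in the root namespace at 10.0.0.1) follows the usual SPDK test-pool pattern from nvmf/common.sh. A dry-run sketch of the same sequence, printing the commands instead of executing them so no root privileges or real NICs are needed (the `run` wrapper is hypothetical; interface and namespace names are taken from the log):

```shell
#!/bin/sh
# Dry-run sketch of the namespace setup performed by nvmf/common.sh.
# `run` only records and prints each command; swap the body for "$@"
# to execute the sequence for real (requires root).
NET_NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target side, ends up inside the namespace
INI_IF=cvl_0_1   # initiator side, stays in the root namespace

run() { CMDS="$CMDS$* ; "; printf '%s\n' "$*"; }

run ip netns add "$NET_NS"
run ip link set "$TGT_IF" netns "$NET_NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NET_NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NET_NS" ip link set "$TGT_IF" up
run ip netns exec "$NET_NS" ip link set lo up
```

Running IPv4 traffic across the namespace boundary is what the two `ping -c 1` checks above verify before the target is started.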
00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:51.383 11:10:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.383 [2024-11-06 11:10:41.785380] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:28:51.383 [2024-11-06 11:10:41.785450] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.383 [2024-11-06 11:10:41.884615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:51.383 [2024-11-06 11:10:41.936403] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:51.383 [2024-11-06 11:10:41.936458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:51.383 [2024-11-06 11:10:41.936467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:51.383 [2024-11-06 11:10:41.936475] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:51.383 [2024-11-06 11:10:41.936482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
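`waitforlisten 3436622` above blocks until nvmf_tgt has created its RPC socket at /var/tmp/spdk.sock. A simplified, self-contained version of that poll loop (the real helper in autotest_common.sh also checks that the PID is still alive and retries RPC calls; this sketch only waits for the socket path to appear, and uses `-e` rather than `-S` so it can be demonstrated against a plain file):

```shell
#!/bin/sh
# Simplified sketch of waitforlisten: poll until a path exists or a
# retry budget is exhausted. The real SPDK helper additionally verifies
# the target PID and issues a test RPC over the socket.
wait_for_path() {
    path=$1; max_retries=${2:-100}
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        [ -e "$path" ] && return 0
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}

# Demonstration: create the path in the background, then wait for it.
tmp=$(mktemp -u)
( sleep 0.3; : > "$tmp" ) &
wait_for_path "$tmp" 50 && echo "listening"
rm -f "$tmp"
```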
00:28:51.383 [2024-11-06 11:10:41.938260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:51.383 [2024-11-06 11:10:41.938425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.383 [2024-11-06 11:10:41.938426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.383 [2024-11-06 11:10:42.625075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.383 Malloc0 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.383 [2024-11-06 11:10:42.691817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:51.383 
11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.383 { 00:28:51.383 "params": { 00:28:51.383 "name": "Nvme$subsystem", 00:28:51.383 "trtype": "$TEST_TRANSPORT", 00:28:51.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.383 "adrfam": "ipv4", 00:28:51.383 "trsvcid": "$NVMF_PORT", 00:28:51.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.383 "hdgst": ${hdgst:-false}, 00:28:51.383 "ddgst": ${ddgst:-false} 00:28:51.383 }, 00:28:51.383 "method": "bdev_nvme_attach_controller" 00:28:51.383 } 00:28:51.383 EOF 00:28:51.383 )") 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:51.383 11:10:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:51.383 "params": { 00:28:51.383 "name": "Nvme1", 00:28:51.383 "trtype": "tcp", 00:28:51.383 "traddr": "10.0.0.2", 00:28:51.383 "adrfam": "ipv4", 00:28:51.383 "trsvcid": "4420", 00:28:51.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:51.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:51.383 "hdgst": false, 00:28:51.383 "ddgst": false 00:28:51.383 }, 00:28:51.383 "method": "bdev_nvme_attach_controller" 00:28:51.383 }' 00:28:51.383 [2024-11-06 11:10:42.746471] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
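`gen_nvmf_target_json` above assembles a per-subsystem JSON fragment from a heredoc template, then joins and normalizes the fragments with `jq .` before handing the result to bdevperf over a file descriptor (`--json /dev/fd/62`). A minimal sketch of the same expand-a-template step, with variable values taken from the resolved output in the log (the `jq` pass is omitted so the sketch needs nothing beyond a POSIX shell):

```shell
#!/bin/sh
# Sketch of the heredoc-template expansion used by gen_nvmf_target_json:
# shell variables are substituted inside the heredoc, producing the
# bdev_nvme_attach_controller parameters seen in the log above.
subsystem=1
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

The `${hdgst:-false}` / `${ddgst:-false}` defaults are why both digest options resolve to `false` in the printed config.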
00:28:51.383 [2024-11-06 11:10:42.746524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3436830 ] 00:28:51.643 [2024-11-06 11:10:42.817181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.643 [2024-11-06 11:10:42.853301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.643 Running I/O for 1 seconds... 00:28:53.026 8621.00 IOPS, 33.68 MiB/s 00:28:53.026 Latency(us) 00:28:53.026 [2024-11-06T10:10:44.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.026 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:53.026 Verification LBA range: start 0x0 length 0x4000 00:28:53.026 Nvme1n1 : 1.02 8613.86 33.65 0.00 0.00 14798.49 1856.85 14964.05 00:28:53.026 [2024-11-06T10:10:44.448Z] =================================================================================================================== 00:28:53.026 [2024-11-06T10:10:44.448Z] Total : 8613.86 33.65 0.00 0.00 14798.49 1856.85 14964.05 00:28:53.026 11:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3437031 00:28:53.026 11:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:53.026 11:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:53.026 11:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:53.026 11:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:53.026 11:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:53.026 11:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:28:53.026 11:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.026 { 00:28:53.026 "params": { 00:28:53.026 "name": "Nvme$subsystem", 00:28:53.026 "trtype": "$TEST_TRANSPORT", 00:28:53.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.026 "adrfam": "ipv4", 00:28:53.026 "trsvcid": "$NVMF_PORT", 00:28:53.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.027 "hdgst": ${hdgst:-false}, 00:28:53.027 "ddgst": ${ddgst:-false} 00:28:53.027 }, 00:28:53.027 "method": "bdev_nvme_attach_controller" 00:28:53.027 } 00:28:53.027 EOF 00:28:53.027 )") 00:28:53.027 11:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:53.027 11:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:53.027 11:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:53.027 11:10:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:53.027 "params": { 00:28:53.027 "name": "Nvme1", 00:28:53.027 "trtype": "tcp", 00:28:53.027 "traddr": "10.0.0.2", 00:28:53.027 "adrfam": "ipv4", 00:28:53.027 "trsvcid": "4420", 00:28:53.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:53.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:53.027 "hdgst": false, 00:28:53.027 "ddgst": false 00:28:53.027 }, 00:28:53.027 "method": "bdev_nvme_attach_controller" 00:28:53.027 }' 00:28:53.027 [2024-11-06 11:10:44.230628] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
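The 1-second run earlier reports 8613.86 IOPS and 33.65 MiB/s for 4 KiB (`-o 4096`) verify I/O; those two columns are consistent, since MiB/s = IOPS x io_size_bytes / 2^20. A quick check of that arithmetic:

```shell
#!/bin/sh
# Cross-check bdevperf's throughput column: IOPS * io_size_bytes / MiB.
awk 'BEGIN { printf "%.2f MiB/s\n", 8613.86 * 4096 / 1048576 }'
# prints "33.65 MiB/s"
```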
00:28:53.027 [2024-11-06 11:10:44.230682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3437031 ] 00:28:53.027 [2024-11-06 11:10:44.301483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.027 [2024-11-06 11:10:44.336781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.287 Running I/O for 15 seconds... 00:28:55.170 9911.00 IOPS, 38.71 MiB/s [2024-11-06T10:10:47.537Z] 10433.50 IOPS, 40.76 MiB/s [2024-11-06T10:10:47.537Z] 11:10:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3436622 00:28:56.115 11:10:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:56.115 [2024-11-06 11:10:47.195412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.115 [2024-11-06 11:10:47.195454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.115 [2024-11-06 11:10:47.195485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:93320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.115 [2024-11-06 11:10:47.195504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195518] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.115 [2024-11-06 11:10:47.195527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.115 [2024-11-06 11:10:47.195547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:93344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.115 [2024-11-06 11:10:47.195565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.115 [2024-11-06 11:10:47.195583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.115 [2024-11-06 11:10:47.195601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.115 [2024-11-06 11:10:47.195620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.115 [2024-11-06 11:10:47.195640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.115 [2024-11-06 11:10:47.195660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.115 [2024-11-06 11:10:47.195678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.115 [2024-11-06 11:10:47.195700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.115 [2024-11-06 11:10:47.195719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.115 [2024-11-06 11:10:47.195739] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.115 [2024-11-06 11:10:47.195852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.115 [2024-11-06 11:10:47.195870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.115 [2024-11-06 11:10:47.195889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.115 [2024-11-06 11:10:47.195900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.115 [2024-11-06 11:10:47.195907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.195917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.195926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.195936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:114 nsid:1 lba:93408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.195945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.195955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:93416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.195963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.195973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.195982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.195991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.195999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.116 [2024-11-06 11:10:47.196034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:56.116 [2024-11-06 11:10:47.196043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:93480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 
lba:93536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:93568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:93576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 
[2024-11-06 11:10:47.196337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:93592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:93600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196430] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.116 [2024-11-06 11:10:47.196446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.116 [2024-11-06 11:10:47.196463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.116 [2024-11-06 11:10:47.196480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.116 [2024-11-06 11:10:47.196498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.116 [2024-11-06 11:10:47.196515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 
lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.116 [2024-11-06 11:10:47.196532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.116 [2024-11-06 11:10:47.196550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.116 [2024-11-06 11:10:47.196593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.116 [2024-11-06 11:10:47.196601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:93656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 
[2024-11-06 11:10:47.196627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:93696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 
lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 
[2024-11-06 11:10:47.196920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.196991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.196998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.197008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.197015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.197025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.197032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.197041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.197050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.197060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.197067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.197076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.197084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.197093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.197101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.197110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 
lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.197118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.197127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.197134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.197144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.197151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.197161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.197174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.197183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.197190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.197200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.197208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 
[2024-11-06 11:10:47.197217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.197225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.197234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.197241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.197251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.197258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.197269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.197277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.117 [2024-11-06 11:10:47.197287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.117 [2024-11-06 11:10:47.197294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 
[2024-11-06 11:10:47.197510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.118 [2024-11-06 11:10:47.197776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.197784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04370 is same with the state(6) to be set 00:28:56.118 [2024-11-06 11:10:47.197795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:56.118 [2024-11-06 11:10:47.197801] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:56.118 [2024-11-06 11:10:47.197807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94200 len:8 PRP1 0x0 PRP2 0x0 00:28:56.118 [2024-11-06 11:10:47.197815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.118 [2024-11-06 11:10:47.201404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.118 [2024-11-06 11:10:47.201459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.118 [2024-11-06 11:10:47.202237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.118 [2024-11-06 11:10:47.202256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.118 [2024-11-06 11:10:47.202264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.118 [2024-11-06 11:10:47.202486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.118 [2024-11-06 11:10:47.202706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.118 [2024-11-06 11:10:47.202716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.118 [2024-11-06 11:10:47.202724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.118 [2024-11-06 11:10:47.202732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.118 [2024-11-06 11:10:47.215507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.118 [2024-11-06 11:10:47.216144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.118 [2024-11-06 11:10:47.216185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.118 [2024-11-06 11:10:47.216196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.118 [2024-11-06 11:10:47.216437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.118 [2024-11-06 11:10:47.216662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.118 [2024-11-06 11:10:47.216672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.118 [2024-11-06 11:10:47.216681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.118 [2024-11-06 11:10:47.216689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.118 [2024-11-06 11:10:47.229459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.118 [2024-11-06 11:10:47.230108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.118 [2024-11-06 11:10:47.230148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.119 [2024-11-06 11:10:47.230159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.119 [2024-11-06 11:10:47.230398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.119 [2024-11-06 11:10:47.230622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.119 [2024-11-06 11:10:47.230632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.119 [2024-11-06 11:10:47.230641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.119 [2024-11-06 11:10:47.230649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.119 [2024-11-06 11:10:47.243434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.119 [2024-11-06 11:10:47.244075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.119 [2024-11-06 11:10:47.244114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.119 [2024-11-06 11:10:47.244126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.119 [2024-11-06 11:10:47.244365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.119 [2024-11-06 11:10:47.244590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.119 [2024-11-06 11:10:47.244600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.119 [2024-11-06 11:10:47.244607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.119 [2024-11-06 11:10:47.244615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.119 [2024-11-06 11:10:47.257385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.119 [2024-11-06 11:10:47.258096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.119 [2024-11-06 11:10:47.258135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.119 [2024-11-06 11:10:47.258146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.119 [2024-11-06 11:10:47.258386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.119 [2024-11-06 11:10:47.258610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.119 [2024-11-06 11:10:47.258620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.119 [2024-11-06 11:10:47.258628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.119 [2024-11-06 11:10:47.258636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.119 [2024-11-06 11:10:47.271195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.119 [2024-11-06 11:10:47.271830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.119 [2024-11-06 11:10:47.271870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.119 [2024-11-06 11:10:47.271882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.119 [2024-11-06 11:10:47.272124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.119 [2024-11-06 11:10:47.272348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.119 [2024-11-06 11:10:47.272357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.119 [2024-11-06 11:10:47.272365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.119 [2024-11-06 11:10:47.272374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.119 [2024-11-06 11:10:47.285143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.119 [2024-11-06 11:10:47.285811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.119 [2024-11-06 11:10:47.285851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.119 [2024-11-06 11:10:47.285863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.119 [2024-11-06 11:10:47.286109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.119 [2024-11-06 11:10:47.286334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.119 [2024-11-06 11:10:47.286346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.119 [2024-11-06 11:10:47.286354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.119 [2024-11-06 11:10:47.286363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.119 [2024-11-06 11:10:47.299135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.119 [2024-11-06 11:10:47.299772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.119 [2024-11-06 11:10:47.299811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.119 [2024-11-06 11:10:47.299822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.119 [2024-11-06 11:10:47.300061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.119 [2024-11-06 11:10:47.300285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.119 [2024-11-06 11:10:47.300296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.119 [2024-11-06 11:10:47.300304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.119 [2024-11-06 11:10:47.300312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.119 [2024-11-06 11:10:47.313085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.119 [2024-11-06 11:10:47.313712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.119 [2024-11-06 11:10:47.313758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.119 [2024-11-06 11:10:47.313770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.119 [2024-11-06 11:10:47.314009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.119 [2024-11-06 11:10:47.314233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.119 [2024-11-06 11:10:47.314245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.119 [2024-11-06 11:10:47.314252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.119 [2024-11-06 11:10:47.314260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.119 [2024-11-06 11:10:47.327020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.119 [2024-11-06 11:10:47.327679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.119 [2024-11-06 11:10:47.327718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.119 [2024-11-06 11:10:47.327729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.119 [2024-11-06 11:10:47.327978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.119 [2024-11-06 11:10:47.328203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.119 [2024-11-06 11:10:47.328219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.119 [2024-11-06 11:10:47.328227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.119 [2024-11-06 11:10:47.328235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.119 [2024-11-06 11:10:47.341015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.119 [2024-11-06 11:10:47.341566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.119 [2024-11-06 11:10:47.341585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.119 [2024-11-06 11:10:47.341594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.119 [2024-11-06 11:10:47.341821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.119 [2024-11-06 11:10:47.342043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.119 [2024-11-06 11:10:47.342061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.120 [2024-11-06 11:10:47.342069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.120 [2024-11-06 11:10:47.342077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.120 [2024-11-06 11:10:47.354841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.120 [2024-11-06 11:10:47.355372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.120 [2024-11-06 11:10:47.355390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.120 [2024-11-06 11:10:47.355398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.120 [2024-11-06 11:10:47.355617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.120 [2024-11-06 11:10:47.355844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.120 [2024-11-06 11:10:47.355855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.120 [2024-11-06 11:10:47.355862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.120 [2024-11-06 11:10:47.355870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.120 [2024-11-06 11:10:47.368818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.120 [2024-11-06 11:10:47.369376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.120 [2024-11-06 11:10:47.369393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.120 [2024-11-06 11:10:47.369401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.120 [2024-11-06 11:10:47.369619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.120 [2024-11-06 11:10:47.369846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.120 [2024-11-06 11:10:47.369856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.120 [2024-11-06 11:10:47.369864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.120 [2024-11-06 11:10:47.369874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.120 [2024-11-06 11:10:47.382613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.120 [2024-11-06 11:10:47.383190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.120 [2024-11-06 11:10:47.383207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.120 [2024-11-06 11:10:47.383215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.120 [2024-11-06 11:10:47.383433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.120 [2024-11-06 11:10:47.383653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.120 [2024-11-06 11:10:47.383663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.120 [2024-11-06 11:10:47.383670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.120 [2024-11-06 11:10:47.383677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.120 [2024-11-06 11:10:47.396433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.120 [2024-11-06 11:10:47.397043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.120 [2024-11-06 11:10:47.397082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.120 [2024-11-06 11:10:47.397093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.120 [2024-11-06 11:10:47.397332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.120 [2024-11-06 11:10:47.397557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.120 [2024-11-06 11:10:47.397567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.120 [2024-11-06 11:10:47.397575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.120 [2024-11-06 11:10:47.397583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.120 [2024-11-06 11:10:47.410344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.120 [2024-11-06 11:10:47.411006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.120 [2024-11-06 11:10:47.411045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.120 [2024-11-06 11:10:47.411057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.120 [2024-11-06 11:10:47.411295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.120 [2024-11-06 11:10:47.411519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.120 [2024-11-06 11:10:47.411529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.120 [2024-11-06 11:10:47.411537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.120 [2024-11-06 11:10:47.411545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.120 [2024-11-06 11:10:47.424304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.120 [2024-11-06 11:10:47.424965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.120 [2024-11-06 11:10:47.425004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.120 [2024-11-06 11:10:47.425015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.120 [2024-11-06 11:10:47.425253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.120 [2024-11-06 11:10:47.425478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.120 [2024-11-06 11:10:47.425488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.120 [2024-11-06 11:10:47.425496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.120 [2024-11-06 11:10:47.425504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.120 [2024-11-06 11:10:47.438305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.120 [2024-11-06 11:10:47.438860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.120 [2024-11-06 11:10:47.438900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.120 [2024-11-06 11:10:47.438912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.120 [2024-11-06 11:10:47.439154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.120 [2024-11-06 11:10:47.439379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.120 [2024-11-06 11:10:47.439389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.120 [2024-11-06 11:10:47.439397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.120 [2024-11-06 11:10:47.439405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.120 [2024-11-06 11:10:47.452194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.120 [2024-11-06 11:10:47.452863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.120 [2024-11-06 11:10:47.452902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.120 [2024-11-06 11:10:47.452915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.120 [2024-11-06 11:10:47.453157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.120 [2024-11-06 11:10:47.453381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.120 [2024-11-06 11:10:47.453392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.120 [2024-11-06 11:10:47.453400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.120 [2024-11-06 11:10:47.453408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.120 [2024-11-06 11:10:47.466176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.120 [2024-11-06 11:10:47.466759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.120 [2024-11-06 11:10:47.466779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.120 [2024-11-06 11:10:47.466787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.120 [2024-11-06 11:10:47.467012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.120 [2024-11-06 11:10:47.467235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.120 [2024-11-06 11:10:47.467246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.120 [2024-11-06 11:10:47.467253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.120 [2024-11-06 11:10:47.467260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.120 [2024-11-06 11:10:47.480026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.120 [2024-11-06 11:10:47.480666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.120 [2024-11-06 11:10:47.480706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.120 [2024-11-06 11:10:47.480717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.120 [2024-11-06 11:10:47.480963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.120 [2024-11-06 11:10:47.481189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.121 [2024-11-06 11:10:47.481200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.121 [2024-11-06 11:10:47.481209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.121 [2024-11-06 11:10:47.481217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.121 [2024-11-06 11:10:47.493990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.121 [2024-11-06 11:10:47.494557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.121 [2024-11-06 11:10:47.494577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.121 [2024-11-06 11:10:47.494584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.121 [2024-11-06 11:10:47.494813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.121 [2024-11-06 11:10:47.495034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.121 [2024-11-06 11:10:47.495044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.121 [2024-11-06 11:10:47.495051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.121 [2024-11-06 11:10:47.495059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.121 [2024-11-06 11:10:47.507820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.121 [2024-11-06 11:10:47.508342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.121 [2024-11-06 11:10:47.508360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.121 [2024-11-06 11:10:47.508368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.121 [2024-11-06 11:10:47.508596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.121 [2024-11-06 11:10:47.508822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.121 [2024-11-06 11:10:47.508838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.121 [2024-11-06 11:10:47.508845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.121 [2024-11-06 11:10:47.508853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.121 [2024-11-06 11:10:47.521624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.121 [2024-11-06 11:10:47.522257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.121 [2024-11-06 11:10:47.522296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.121 [2024-11-06 11:10:47.522307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.121 [2024-11-06 11:10:47.522546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.121 [2024-11-06 11:10:47.522781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.121 [2024-11-06 11:10:47.522792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.121 [2024-11-06 11:10:47.522799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.121 [2024-11-06 11:10:47.522807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.383 [2024-11-06 11:10:47.535602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.383 [2024-11-06 11:10:47.536283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.383 [2024-11-06 11:10:47.536323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.383 [2024-11-06 11:10:47.536334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.383 [2024-11-06 11:10:47.536573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.383 [2024-11-06 11:10:47.536805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.383 [2024-11-06 11:10:47.536816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.383 [2024-11-06 11:10:47.536824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.383 [2024-11-06 11:10:47.536833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.383 9349.00 IOPS, 36.52 MiB/s [2024-11-06T10:10:47.805Z] [2024-11-06 11:10:47.549454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.383 [2024-11-06 11:10:47.550030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.383 [2024-11-06 11:10:47.550070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.383 [2024-11-06 11:10:47.550081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.383 [2024-11-06 11:10:47.550320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.383 [2024-11-06 11:10:47.550544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.383 [2024-11-06 11:10:47.550554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.383 [2024-11-06 11:10:47.550563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.383 [2024-11-06 11:10:47.550576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.383 [2024-11-06 11:10:47.563354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.383 [2024-11-06 11:10:47.564040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.383 [2024-11-06 11:10:47.564080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.383 [2024-11-06 11:10:47.564092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.383 [2024-11-06 11:10:47.564331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.383 [2024-11-06 11:10:47.564556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.383 [2024-11-06 11:10:47.564566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.383 [2024-11-06 11:10:47.564574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.383 [2024-11-06 11:10:47.564583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.383 [2024-11-06 11:10:47.577355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.383 [2024-11-06 11:10:47.578530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.383 [2024-11-06 11:10:47.578562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.383 [2024-11-06 11:10:47.578574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.383 [2024-11-06 11:10:47.578821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.383 [2024-11-06 11:10:47.579046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.383 [2024-11-06 11:10:47.579057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.383 [2024-11-06 11:10:47.579065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.383 [2024-11-06 11:10:47.579074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.383 [2024-11-06 11:10:47.591215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.383 [2024-11-06 11:10:47.591881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.383 [2024-11-06 11:10:47.591920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.383 [2024-11-06 11:10:47.591931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.383 [2024-11-06 11:10:47.592171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.383 [2024-11-06 11:10:47.592395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.383 [2024-11-06 11:10:47.592405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.383 [2024-11-06 11:10:47.592413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.383 [2024-11-06 11:10:47.592422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.383 [2024-11-06 11:10:47.605216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.383 [2024-11-06 11:10:47.605852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.383 [2024-11-06 11:10:47.605892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.383 [2024-11-06 11:10:47.605905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.383 [2024-11-06 11:10:47.606147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.383 [2024-11-06 11:10:47.606372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.383 [2024-11-06 11:10:47.606383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.383 [2024-11-06 11:10:47.606391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.383 [2024-11-06 11:10:47.606399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.383 [2024-11-06 11:10:47.619168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.383 [2024-11-06 11:10:47.619805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.383 [2024-11-06 11:10:47.619845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.383 [2024-11-06 11:10:47.619857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.383 [2024-11-06 11:10:47.620098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.383 [2024-11-06 11:10:47.620322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.383 [2024-11-06 11:10:47.620333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.383 [2024-11-06 11:10:47.620340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.383 [2024-11-06 11:10:47.620348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.383 [2024-11-06 11:10:47.633116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.383 [2024-11-06 11:10:47.633663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.383 [2024-11-06 11:10:47.633683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.383 [2024-11-06 11:10:47.633691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.384 [2024-11-06 11:10:47.633917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.384 [2024-11-06 11:10:47.634137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.384 [2024-11-06 11:10:47.634148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.384 [2024-11-06 11:10:47.634155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.384 [2024-11-06 11:10:47.634162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.384 [2024-11-06 11:10:47.646970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.384 [2024-11-06 11:10:47.647490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.384 [2024-11-06 11:10:47.647508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.384 [2024-11-06 11:10:47.647520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.384 [2024-11-06 11:10:47.647740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.384 [2024-11-06 11:10:47.647968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.384 [2024-11-06 11:10:47.647977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.384 [2024-11-06 11:10:47.647985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.384 [2024-11-06 11:10:47.647992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.384 [2024-11-06 11:10:47.660968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.384 [2024-11-06 11:10:47.661533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.384 [2024-11-06 11:10:47.661551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.384 [2024-11-06 11:10:47.661559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.384 [2024-11-06 11:10:47.661784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.384 [2024-11-06 11:10:47.662004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.384 [2024-11-06 11:10:47.662014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.384 [2024-11-06 11:10:47.662021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.384 [2024-11-06 11:10:47.662028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.384 [2024-11-06 11:10:47.674783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.384 [2024-11-06 11:10:47.675439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.384 [2024-11-06 11:10:47.675478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.384 [2024-11-06 11:10:47.675489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.384 [2024-11-06 11:10:47.675729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.384 [2024-11-06 11:10:47.675960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.384 [2024-11-06 11:10:47.675973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.384 [2024-11-06 11:10:47.675981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.384 [2024-11-06 11:10:47.675989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.384 [2024-11-06 11:10:47.688754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.384 [2024-11-06 11:10:47.689332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.384 [2024-11-06 11:10:47.689352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.384 [2024-11-06 11:10:47.689360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.384 [2024-11-06 11:10:47.689580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.384 [2024-11-06 11:10:47.689818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.384 [2024-11-06 11:10:47.689829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.384 [2024-11-06 11:10:47.689836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.384 [2024-11-06 11:10:47.689843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.384 [2024-11-06 11:10:47.702609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.384 [2024-11-06 11:10:47.703161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.384 [2024-11-06 11:10:47.703181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.384 [2024-11-06 11:10:47.703189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.384 [2024-11-06 11:10:47.703409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.384 [2024-11-06 11:10:47.703628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.384 [2024-11-06 11:10:47.703639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.384 [2024-11-06 11:10:47.703646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.384 [2024-11-06 11:10:47.703653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.384 [2024-11-06 11:10:47.716426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.384 [2024-11-06 11:10:47.717053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.384 [2024-11-06 11:10:47.717092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.384 [2024-11-06 11:10:47.717105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.384 [2024-11-06 11:10:47.717345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.384 [2024-11-06 11:10:47.717569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.384 [2024-11-06 11:10:47.717579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.384 [2024-11-06 11:10:47.717587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.384 [2024-11-06 11:10:47.717595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.384 [2024-11-06 11:10:47.730374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.384 [2024-11-06 11:10:47.731018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.384 [2024-11-06 11:10:47.731058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.384 [2024-11-06 11:10:47.731070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.384 [2024-11-06 11:10:47.731308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.384 [2024-11-06 11:10:47.731533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.384 [2024-11-06 11:10:47.731543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.384 [2024-11-06 11:10:47.731551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.384 [2024-11-06 11:10:47.731563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.384 [2024-11-06 11:10:47.744349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.384 [2024-11-06 11:10:47.745003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.384 [2024-11-06 11:10:47.745043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.384 [2024-11-06 11:10:47.745054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.384 [2024-11-06 11:10:47.745292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.384 [2024-11-06 11:10:47.745517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.384 [2024-11-06 11:10:47.745528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.384 [2024-11-06 11:10:47.745536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.384 [2024-11-06 11:10:47.745544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.384 [2024-11-06 11:10:47.758339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.384 [2024-11-06 11:10:47.758881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.384 [2024-11-06 11:10:47.758902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.384 [2024-11-06 11:10:47.758910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.384 [2024-11-06 11:10:47.759131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.384 [2024-11-06 11:10:47.759351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.384 [2024-11-06 11:10:47.759361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.384 [2024-11-06 11:10:47.759368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.384 [2024-11-06 11:10:47.759375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.384 [2024-11-06 11:10:47.772134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.384 [2024-11-06 11:10:47.772698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.384 [2024-11-06 11:10:47.772715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.384 [2024-11-06 11:10:47.772722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.385 [2024-11-06 11:10:47.772947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.385 [2024-11-06 11:10:47.773167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.385 [2024-11-06 11:10:47.773177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.385 [2024-11-06 11:10:47.773184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.385 [2024-11-06 11:10:47.773192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.385 [2024-11-06 11:10:47.785953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.385 [2024-11-06 11:10:47.786488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.385 [2024-11-06 11:10:47.786505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.385 [2024-11-06 11:10:47.786512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.385 [2024-11-06 11:10:47.786731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.385 [2024-11-06 11:10:47.786958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.385 [2024-11-06 11:10:47.786968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.385 [2024-11-06 11:10:47.786975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.385 [2024-11-06 11:10:47.786984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.385 [2024-11-06 11:10:47.799748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.385 [2024-11-06 11:10:47.800310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.385 [2024-11-06 11:10:47.800328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.385 [2024-11-06 11:10:47.800336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.385 [2024-11-06 11:10:47.800555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.385 [2024-11-06 11:10:47.800780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.385 [2024-11-06 11:10:47.800792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.385 [2024-11-06 11:10:47.800799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.385 [2024-11-06 11:10:47.800806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.647 [2024-11-06 11:10:47.813555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.647 [2024-11-06 11:10:47.814240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.647 [2024-11-06 11:10:47.814280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.647 [2024-11-06 11:10:47.814291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.647 [2024-11-06 11:10:47.814530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.647 [2024-11-06 11:10:47.814762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.647 [2024-11-06 11:10:47.814774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.647 [2024-11-06 11:10:47.814782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.647 [2024-11-06 11:10:47.814790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.647 [2024-11-06 11:10:47.827556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.647 [2024-11-06 11:10:47.828278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.647 [2024-11-06 11:10:47.828317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.647 [2024-11-06 11:10:47.828334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.647 [2024-11-06 11:10:47.828573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.647 [2024-11-06 11:10:47.828807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.647 [2024-11-06 11:10:47.828819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.647 [2024-11-06 11:10:47.828827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.647 [2024-11-06 11:10:47.828835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.647 [2024-11-06 11:10:47.841404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.647 [2024-11-06 11:10:47.842067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.647 [2024-11-06 11:10:47.842107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.647 [2024-11-06 11:10:47.842118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.647 [2024-11-06 11:10:47.842358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.647 [2024-11-06 11:10:47.842582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.647 [2024-11-06 11:10:47.842593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.647 [2024-11-06 11:10:47.842601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.647 [2024-11-06 11:10:47.842610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.647 [2024-11-06 11:10:47.855401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.647 [2024-11-06 11:10:47.856096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.647 [2024-11-06 11:10:47.856136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.647 [2024-11-06 11:10:47.856147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.647 [2024-11-06 11:10:47.856386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.647 [2024-11-06 11:10:47.856610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.647 [2024-11-06 11:10:47.856620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.647 [2024-11-06 11:10:47.856628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.647 [2024-11-06 11:10:47.856637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.647 [2024-11-06 11:10:47.869209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.647 [2024-11-06 11:10:47.869767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.647 [2024-11-06 11:10:47.869787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.647 [2024-11-06 11:10:47.869795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.647 [2024-11-06 11:10:47.870015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.647 [2024-11-06 11:10:47.870240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.647 [2024-11-06 11:10:47.870251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.647 [2024-11-06 11:10:47.870259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.647 [2024-11-06 11:10:47.870265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.647 [2024-11-06 11:10:47.883025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.647 [2024-11-06 11:10:47.883653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.647 [2024-11-06 11:10:47.883693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.647 [2024-11-06 11:10:47.883705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.647 [2024-11-06 11:10:47.883953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.647 [2024-11-06 11:10:47.884178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.647 [2024-11-06 11:10:47.884188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.647 [2024-11-06 11:10:47.884196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.647 [2024-11-06 11:10:47.884204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.647 [2024-11-06 11:10:47.896963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.647 [2024-11-06 11:10:47.897510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.647 [2024-11-06 11:10:47.897549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.647 [2024-11-06 11:10:47.897560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.647 [2024-11-06 11:10:47.897806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.647 [2024-11-06 11:10:47.898031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.647 [2024-11-06 11:10:47.898042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.647 [2024-11-06 11:10:47.898050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.647 [2024-11-06 11:10:47.898058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.647 [2024-11-06 11:10:47.910822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.647 [2024-11-06 11:10:47.911496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.647 [2024-11-06 11:10:47.911535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.647 [2024-11-06 11:10:47.911546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.647 [2024-11-06 11:10:47.911793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.647 [2024-11-06 11:10:47.912018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.647 [2024-11-06 11:10:47.912028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.647 [2024-11-06 11:10:47.912036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.647 [2024-11-06 11:10:47.912049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.647 [2024-11-06 11:10:47.924814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.647 [2024-11-06 11:10:47.925362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.648 [2024-11-06 11:10:47.925381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.648 [2024-11-06 11:10:47.925390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.648 [2024-11-06 11:10:47.925609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.648 [2024-11-06 11:10:47.925835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.648 [2024-11-06 11:10:47.925845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.648 [2024-11-06 11:10:47.925853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.648 [2024-11-06 11:10:47.925860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.648 [2024-11-06 11:10:47.938623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.648 [2024-11-06 11:10:47.939166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.648 [2024-11-06 11:10:47.939184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.648 [2024-11-06 11:10:47.939192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.648 [2024-11-06 11:10:47.939411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.648 [2024-11-06 11:10:47.939630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.648 [2024-11-06 11:10:47.939641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.648 [2024-11-06 11:10:47.939648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.648 [2024-11-06 11:10:47.939654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.648 [2024-11-06 11:10:47.952421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:56.648 [2024-11-06 11:10:47.953108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.648 [2024-11-06 11:10:47.953148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:56.648 [2024-11-06 11:10:47.953159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:56.648 [2024-11-06 11:10:47.953398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:56.648 [2024-11-06 11:10:47.953621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:56.648 [2024-11-06 11:10:47.953633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:56.648 [2024-11-06 11:10:47.953641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:56.648 [2024-11-06 11:10:47.953649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:56.648 [2024-11-06 11:10:47.966219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.648 [2024-11-06 11:10:47.966886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.648 [2024-11-06 11:10:47.966925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.648 [2024-11-06 11:10:47.966938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.648 [2024-11-06 11:10:47.967178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.648 [2024-11-06 11:10:47.967402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.648 [2024-11-06 11:10:47.967413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.648 [2024-11-06 11:10:47.967422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.648 [2024-11-06 11:10:47.967430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.648 [2024-11-06 11:10:47.980201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.648 [2024-11-06 11:10:47.980832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.648 [2024-11-06 11:10:47.980871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.648 [2024-11-06 11:10:47.980883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.648 [2024-11-06 11:10:47.981124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.648 [2024-11-06 11:10:47.981347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.648 [2024-11-06 11:10:47.981358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.648 [2024-11-06 11:10:47.981365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.648 [2024-11-06 11:10:47.981374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.648 [2024-11-06 11:10:47.994174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.648 [2024-11-06 11:10:47.994762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.648 [2024-11-06 11:10:47.994800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.648 [2024-11-06 11:10:47.994813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.648 [2024-11-06 11:10:47.995053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.648 [2024-11-06 11:10:47.995277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.648 [2024-11-06 11:10:47.995287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.648 [2024-11-06 11:10:47.995295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.648 [2024-11-06 11:10:47.995303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.648 [2024-11-06 11:10:48.008072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.648 [2024-11-06 11:10:48.008706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.648 [2024-11-06 11:10:48.008753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.648 [2024-11-06 11:10:48.008770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.648 [2024-11-06 11:10:48.009012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.648 [2024-11-06 11:10:48.009237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.648 [2024-11-06 11:10:48.009246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.648 [2024-11-06 11:10:48.009254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.648 [2024-11-06 11:10:48.009262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.648 [2024-11-06 11:10:48.022027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.648 [2024-11-06 11:10:48.022710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.648 [2024-11-06 11:10:48.022757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.648 [2024-11-06 11:10:48.022770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.648 [2024-11-06 11:10:48.023010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.648 [2024-11-06 11:10:48.023234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.648 [2024-11-06 11:10:48.023244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.648 [2024-11-06 11:10:48.023252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.648 [2024-11-06 11:10:48.023260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.648 [2024-11-06 11:10:48.036033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.648 [2024-11-06 11:10:48.036497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.648 [2024-11-06 11:10:48.036517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.648 [2024-11-06 11:10:48.036525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.648 [2024-11-06 11:10:48.036744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.648 [2024-11-06 11:10:48.036971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.648 [2024-11-06 11:10:48.036982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.648 [2024-11-06 11:10:48.036989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.648 [2024-11-06 11:10:48.036996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.648 [2024-11-06 11:10:48.049977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.648 [2024-11-06 11:10:48.050548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.648 [2024-11-06 11:10:48.050565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.648 [2024-11-06 11:10:48.050573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.648 [2024-11-06 11:10:48.050797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.648 [2024-11-06 11:10:48.051017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.648 [2024-11-06 11:10:48.051031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.648 [2024-11-06 11:10:48.051039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.648 [2024-11-06 11:10:48.051045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.649 [2024-11-06 11:10:48.063824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.649 [2024-11-06 11:10:48.064357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.649 [2024-11-06 11:10:48.064396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.649 [2024-11-06 11:10:48.064409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.649 [2024-11-06 11:10:48.064649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.649 [2024-11-06 11:10:48.064881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.649 [2024-11-06 11:10:48.064892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.649 [2024-11-06 11:10:48.064900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.649 [2024-11-06 11:10:48.064908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.914 [2024-11-06 11:10:48.077668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.915 [2024-11-06 11:10:48.078259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.915 [2024-11-06 11:10:48.078279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.915 [2024-11-06 11:10:48.078287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.915 [2024-11-06 11:10:48.078507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.915 [2024-11-06 11:10:48.078726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.915 [2024-11-06 11:10:48.078736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.915 [2024-11-06 11:10:48.078744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.915 [2024-11-06 11:10:48.078757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.915 [2024-11-06 11:10:48.091511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.915 [2024-11-06 11:10:48.092103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.915 [2024-11-06 11:10:48.092122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.915 [2024-11-06 11:10:48.092130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.915 [2024-11-06 11:10:48.092349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.915 [2024-11-06 11:10:48.092568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.915 [2024-11-06 11:10:48.092577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.915 [2024-11-06 11:10:48.092584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.915 [2024-11-06 11:10:48.092596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.915 [2024-11-06 11:10:48.105352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.915 [2024-11-06 11:10:48.105986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.915 [2024-11-06 11:10:48.106025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.915 [2024-11-06 11:10:48.106036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.915 [2024-11-06 11:10:48.106275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.915 [2024-11-06 11:10:48.106499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.915 [2024-11-06 11:10:48.106509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.915 [2024-11-06 11:10:48.106518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.915 [2024-11-06 11:10:48.106526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.915 [2024-11-06 11:10:48.119385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.915 [2024-11-06 11:10:48.120076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.915 [2024-11-06 11:10:48.120115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.915 [2024-11-06 11:10:48.120127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.915 [2024-11-06 11:10:48.120366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.915 [2024-11-06 11:10:48.120590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.915 [2024-11-06 11:10:48.120601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.915 [2024-11-06 11:10:48.120608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.915 [2024-11-06 11:10:48.120617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.915 [2024-11-06 11:10:48.133187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.915 [2024-11-06 11:10:48.133881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.915 [2024-11-06 11:10:48.133921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.915 [2024-11-06 11:10:48.133934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.915 [2024-11-06 11:10:48.134176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.915 [2024-11-06 11:10:48.134400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.915 [2024-11-06 11:10:48.134411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.915 [2024-11-06 11:10:48.134419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.915 [2024-11-06 11:10:48.134427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.915 [2024-11-06 11:10:48.147000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.915 [2024-11-06 11:10:48.147552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.915 [2024-11-06 11:10:48.147572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.915 [2024-11-06 11:10:48.147580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.915 [2024-11-06 11:10:48.147806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.915 [2024-11-06 11:10:48.148027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.915 [2024-11-06 11:10:48.148037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.915 [2024-11-06 11:10:48.148044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.916 [2024-11-06 11:10:48.148052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.916 [2024-11-06 11:10:48.160825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.916 [2024-11-06 11:10:48.161511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.916 [2024-11-06 11:10:48.161550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.916 [2024-11-06 11:10:48.161561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.916 [2024-11-06 11:10:48.161807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.916 [2024-11-06 11:10:48.162032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.916 [2024-11-06 11:10:48.162045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.916 [2024-11-06 11:10:48.162053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.916 [2024-11-06 11:10:48.162061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.916 [2024-11-06 11:10:48.174628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.916 [2024-11-06 11:10:48.175263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.916 [2024-11-06 11:10:48.175302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.916 [2024-11-06 11:10:48.175313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.916 [2024-11-06 11:10:48.175552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.916 [2024-11-06 11:10:48.175784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.916 [2024-11-06 11:10:48.175795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.916 [2024-11-06 11:10:48.175804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.916 [2024-11-06 11:10:48.175812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.916 [2024-11-06 11:10:48.188584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.916 [2024-11-06 11:10:48.189237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.916 [2024-11-06 11:10:48.189277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.916 [2024-11-06 11:10:48.189292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.916 [2024-11-06 11:10:48.189531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.916 [2024-11-06 11:10:48.189764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.916 [2024-11-06 11:10:48.189774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.916 [2024-11-06 11:10:48.189782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.916 [2024-11-06 11:10:48.189790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.916 [2024-11-06 11:10:48.202555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.916 [2024-11-06 11:10:48.203347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.916 [2024-11-06 11:10:48.203387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.916 [2024-11-06 11:10:48.203398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.916 [2024-11-06 11:10:48.203637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.916 [2024-11-06 11:10:48.203870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.916 [2024-11-06 11:10:48.203881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.916 [2024-11-06 11:10:48.203889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.916 [2024-11-06 11:10:48.203897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.916 [2024-11-06 11:10:48.216449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.916 [2024-11-06 11:10:48.216998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.916 [2024-11-06 11:10:48.217018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.916 [2024-11-06 11:10:48.217027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.916 [2024-11-06 11:10:48.217247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.916 [2024-11-06 11:10:48.217467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.916 [2024-11-06 11:10:48.217476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.916 [2024-11-06 11:10:48.217483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.916 [2024-11-06 11:10:48.217490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.916 [2024-11-06 11:10:48.230359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.916 [2024-11-06 11:10:48.230907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.916 [2024-11-06 11:10:48.230947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.916 [2024-11-06 11:10:48.230959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.916 [2024-11-06 11:10:48.231200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.916 [2024-11-06 11:10:48.231424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.916 [2024-11-06 11:10:48.231439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.916 [2024-11-06 11:10:48.231447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.916 [2024-11-06 11:10:48.231455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.916 [2024-11-06 11:10:48.244240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.917 [2024-11-06 11:10:48.244867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.917 [2024-11-06 11:10:48.244907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.917 [2024-11-06 11:10:48.244920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.917 [2024-11-06 11:10:48.245160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.917 [2024-11-06 11:10:48.245384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.917 [2024-11-06 11:10:48.245395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.917 [2024-11-06 11:10:48.245402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.917 [2024-11-06 11:10:48.245410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.917 [2024-11-06 11:10:48.258196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.917 [2024-11-06 11:10:48.258770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.917 [2024-11-06 11:10:48.258791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.917 [2024-11-06 11:10:48.258800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.917 [2024-11-06 11:10:48.259020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.917 [2024-11-06 11:10:48.259239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.917 [2024-11-06 11:10:48.259248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.917 [2024-11-06 11:10:48.259256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.917 [2024-11-06 11:10:48.259263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.917 [2024-11-06 11:10:48.272027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.917 [2024-11-06 11:10:48.272595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.917 [2024-11-06 11:10:48.272613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.917 [2024-11-06 11:10:48.272621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.917 [2024-11-06 11:10:48.272845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.917 [2024-11-06 11:10:48.273065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.917 [2024-11-06 11:10:48.273075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.917 [2024-11-06 11:10:48.273082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.917 [2024-11-06 11:10:48.273094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.917 [2024-11-06 11:10:48.285846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.917 [2024-11-06 11:10:48.286503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.917 [2024-11-06 11:10:48.286543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.917 [2024-11-06 11:10:48.286554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.917 [2024-11-06 11:10:48.286800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.917 [2024-11-06 11:10:48.287025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.917 [2024-11-06 11:10:48.287036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.917 [2024-11-06 11:10:48.287044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.917 [2024-11-06 11:10:48.287052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.917 [2024-11-06 11:10:48.299815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.917 [2024-11-06 11:10:48.300488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.917 [2024-11-06 11:10:48.300527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.917 [2024-11-06 11:10:48.300538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.917 [2024-11-06 11:10:48.300786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.917 [2024-11-06 11:10:48.301011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.917 [2024-11-06 11:10:48.301021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.917 [2024-11-06 11:10:48.301029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.917 [2024-11-06 11:10:48.301038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.917 [2024-11-06 11:10:48.313803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.917 [2024-11-06 11:10:48.314432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.917 [2024-11-06 11:10:48.314471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.917 [2024-11-06 11:10:48.314482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.917 [2024-11-06 11:10:48.314721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.917 [2024-11-06 11:10:48.314954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.917 [2024-11-06 11:10:48.314965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.917 [2024-11-06 11:10:48.314973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.917 [2024-11-06 11:10:48.314983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.918 [2024-11-06 11:10:48.327744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.918 [2024-11-06 11:10:48.328401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.918 [2024-11-06 11:10:48.328440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:56.918 [2024-11-06 11:10:48.328451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:56.918 [2024-11-06 11:10:48.328690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:56.918 [2024-11-06 11:10:48.328924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.918 [2024-11-06 11:10:48.328935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.918 [2024-11-06 11:10:48.328943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.918 [2024-11-06 11:10:48.328951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.182 [2024-11-06 11:10:48.341719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.182 [2024-11-06 11:10:48.342384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.182 [2024-11-06 11:10:48.342423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.182 [2024-11-06 11:10:48.342434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.182 [2024-11-06 11:10:48.342673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.182 [2024-11-06 11:10:48.342907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.182 [2024-11-06 11:10:48.342919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.182 [2024-11-06 11:10:48.342927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.182 [2024-11-06 11:10:48.342935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.182 [2024-11-06 11:10:48.355704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.182 [2024-11-06 11:10:48.356363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.182 [2024-11-06 11:10:48.356403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.182 [2024-11-06 11:10:48.356415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.182 [2024-11-06 11:10:48.356654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.182 [2024-11-06 11:10:48.356888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.182 [2024-11-06 11:10:48.356900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.182 [2024-11-06 11:10:48.356908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.182 [2024-11-06 11:10:48.356916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.182 [2024-11-06 11:10:48.369676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.182 [2024-11-06 11:10:48.370352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.182 [2024-11-06 11:10:48.370391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.182 [2024-11-06 11:10:48.370411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.182 [2024-11-06 11:10:48.370650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.182 [2024-11-06 11:10:48.370901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.182 [2024-11-06 11:10:48.370913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.182 [2024-11-06 11:10:48.370921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.182 [2024-11-06 11:10:48.370930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.182 [2024-11-06 11:10:48.383481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.182 [2024-11-06 11:10:48.384056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.182 [2024-11-06 11:10:48.384078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.182 [2024-11-06 11:10:48.384086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.182 [2024-11-06 11:10:48.384306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.182 [2024-11-06 11:10:48.384526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.182 [2024-11-06 11:10:48.384536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.182 [2024-11-06 11:10:48.384543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.182 [2024-11-06 11:10:48.384550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.182 [2024-11-06 11:10:48.397306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.182 [2024-11-06 11:10:48.397846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.182 [2024-11-06 11:10:48.397863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.182 [2024-11-06 11:10:48.397871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.182 [2024-11-06 11:10:48.398090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.182 [2024-11-06 11:10:48.398310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.182 [2024-11-06 11:10:48.398319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.182 [2024-11-06 11:10:48.398326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.182 [2024-11-06 11:10:48.398333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.182 [2024-11-06 11:10:48.411292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.182 [2024-11-06 11:10:48.411880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.182 [2024-11-06 11:10:48.411919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.183 [2024-11-06 11:10:48.411930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.183 [2024-11-06 11:10:48.412168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.183 [2024-11-06 11:10:48.412393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.183 [2024-11-06 11:10:48.412407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.183 [2024-11-06 11:10:48.412415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.183 [2024-11-06 11:10:48.412423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.183 [2024-11-06 11:10:48.425193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.183 [2024-11-06 11:10:48.425848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.183 [2024-11-06 11:10:48.425888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.183 [2024-11-06 11:10:48.425900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.183 [2024-11-06 11:10:48.426140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.183 [2024-11-06 11:10:48.426363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.183 [2024-11-06 11:10:48.426374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.183 [2024-11-06 11:10:48.426382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.183 [2024-11-06 11:10:48.426390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.183 [2024-11-06 11:10:48.439168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.183 [2024-11-06 11:10:48.439843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.183 [2024-11-06 11:10:48.439882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.183 [2024-11-06 11:10:48.439893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.183 [2024-11-06 11:10:48.440132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.183 [2024-11-06 11:10:48.440356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.183 [2024-11-06 11:10:48.440366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.183 [2024-11-06 11:10:48.440374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.183 [2024-11-06 11:10:48.440383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.183 [2024-11-06 11:10:48.453165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.183 [2024-11-06 11:10:48.453863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.183 [2024-11-06 11:10:48.453903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.183 [2024-11-06 11:10:48.453915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.183 [2024-11-06 11:10:48.454155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.183 [2024-11-06 11:10:48.454379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.183 [2024-11-06 11:10:48.454390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.183 [2024-11-06 11:10:48.454398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.183 [2024-11-06 11:10:48.454410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.183 [2024-11-06 11:10:48.466977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.183 [2024-11-06 11:10:48.467527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.183 [2024-11-06 11:10:48.467566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.183 [2024-11-06 11:10:48.467577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.183 [2024-11-06 11:10:48.467824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.183 [2024-11-06 11:10:48.468049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.183 [2024-11-06 11:10:48.468060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.183 [2024-11-06 11:10:48.468068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.183 [2024-11-06 11:10:48.468076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.183 [2024-11-06 11:10:48.480845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.183 [2024-11-06 11:10:48.481467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.183 [2024-11-06 11:10:48.481507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.183 [2024-11-06 11:10:48.481518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.183 [2024-11-06 11:10:48.481767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.183 [2024-11-06 11:10:48.481991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.183 [2024-11-06 11:10:48.482002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.183 [2024-11-06 11:10:48.482010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.183 [2024-11-06 11:10:48.482017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.183 [2024-11-06 11:10:48.494776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.183 [2024-11-06 11:10:48.495407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.183 [2024-11-06 11:10:48.495445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.183 [2024-11-06 11:10:48.495457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.183 [2024-11-06 11:10:48.495695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.183 [2024-11-06 11:10:48.495929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.183 [2024-11-06 11:10:48.495940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.183 [2024-11-06 11:10:48.495948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.183 [2024-11-06 11:10:48.495956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.183 [2024-11-06 11:10:48.508715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.183 [2024-11-06 11:10:48.509397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.183 [2024-11-06 11:10:48.509437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.183 [2024-11-06 11:10:48.509448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.183 [2024-11-06 11:10:48.509687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.183 [2024-11-06 11:10:48.509920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.183 [2024-11-06 11:10:48.509932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.183 [2024-11-06 11:10:48.509939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.183 [2024-11-06 11:10:48.509948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.183 [2024-11-06 11:10:48.522704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.183 [2024-11-06 11:10:48.523287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.183 [2024-11-06 11:10:48.523307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.183 [2024-11-06 11:10:48.523316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.183 [2024-11-06 11:10:48.523535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.183 [2024-11-06 11:10:48.523763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.183 [2024-11-06 11:10:48.523773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.183 [2024-11-06 11:10:48.523780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.183 [2024-11-06 11:10:48.523787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.183 [2024-11-06 11:10:48.536553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.183 [2024-11-06 11:10:48.537101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.183 [2024-11-06 11:10:48.537119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.183 [2024-11-06 11:10:48.537127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.183 [2024-11-06 11:10:48.537346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.183 [2024-11-06 11:10:48.537571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.183 [2024-11-06 11:10:48.537581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.183 [2024-11-06 11:10:48.537589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.183 [2024-11-06 11:10:48.537595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.183 7011.75 IOPS, 27.39 MiB/s [2024-11-06T10:10:48.606Z] [2024-11-06 11:10:48.550400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.184 [2024-11-06 11:10:48.550865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.184 [2024-11-06 11:10:48.550905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.184 [2024-11-06 11:10:48.550922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.184 [2024-11-06 11:10:48.551163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.184 [2024-11-06 11:10:48.551387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.184 [2024-11-06 11:10:48.551398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.184 [2024-11-06 11:10:48.551406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.184 [2024-11-06 11:10:48.551415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.184 [2024-11-06 11:10:48.564197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.184 [2024-11-06 11:10:48.564843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.184 [2024-11-06 11:10:48.564882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.184 [2024-11-06 11:10:48.564893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.184 [2024-11-06 11:10:48.565132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.184 [2024-11-06 11:10:48.565355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.184 [2024-11-06 11:10:48.565366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.184 [2024-11-06 11:10:48.565374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.184 [2024-11-06 11:10:48.565382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.184 [2024-11-06 11:10:48.578150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.184 [2024-11-06 11:10:48.578776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.184 [2024-11-06 11:10:48.578824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.184 [2024-11-06 11:10:48.578837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.184 [2024-11-06 11:10:48.579076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.184 [2024-11-06 11:10:48.579300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.184 [2024-11-06 11:10:48.579309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.184 [2024-11-06 11:10:48.579317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.184 [2024-11-06 11:10:48.579325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.184 [2024-11-06 11:10:48.592092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.184 [2024-11-06 11:10:48.592767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.184 [2024-11-06 11:10:48.592806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.184 [2024-11-06 11:10:48.592818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.184 [2024-11-06 11:10:48.593060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.184 [2024-11-06 11:10:48.593288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.184 [2024-11-06 11:10:48.593298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.184 [2024-11-06 11:10:48.593306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.184 [2024-11-06 11:10:48.593314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.446 [2024-11-06 11:10:48.606091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.446 [2024-11-06 11:10:48.606668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 11:10:48.606688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.446 [2024-11-06 11:10:48.606696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.446 [2024-11-06 11:10:48.606922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.446 [2024-11-06 11:10:48.607143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.446 [2024-11-06 11:10:48.607153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.446 [2024-11-06 11:10:48.607160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.446 [2024-11-06 11:10:48.607167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.446 [2024-11-06 11:10:48.619920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.446 [2024-11-06 11:10:48.620545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 11:10:48.620584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.446 [2024-11-06 11:10:48.620597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.446 [2024-11-06 11:10:48.620845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.446 [2024-11-06 11:10:48.621070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.446 [2024-11-06 11:10:48.621081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.446 [2024-11-06 11:10:48.621089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.446 [2024-11-06 11:10:48.621097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.446 [2024-11-06 11:10:48.633903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.446 [2024-11-06 11:10:48.634567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 11:10:48.634606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.446 [2024-11-06 11:10:48.634618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.446 [2024-11-06 11:10:48.634866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.446 [2024-11-06 11:10:48.635091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.446 [2024-11-06 11:10:48.635101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.446 [2024-11-06 11:10:48.635114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.446 [2024-11-06 11:10:48.635122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.446 [2024-11-06 11:10:48.647898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.446 [2024-11-06 11:10:48.648521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 11:10:48.648560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.446 [2024-11-06 11:10:48.648571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.446 [2024-11-06 11:10:48.648818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.446 [2024-11-06 11:10:48.649043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.446 [2024-11-06 11:10:48.649054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.446 [2024-11-06 11:10:48.649062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.446 [2024-11-06 11:10:48.649071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.446 [2024-11-06 11:10:48.661848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.446 [2024-11-06 11:10:48.662490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 11:10:48.662529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.446 [2024-11-06 11:10:48.662540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.446 [2024-11-06 11:10:48.662789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.446 [2024-11-06 11:10:48.663014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.446 [2024-11-06 11:10:48.663024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.446 [2024-11-06 11:10:48.663032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.446 [2024-11-06 11:10:48.663040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.446 [2024-11-06 11:10:48.675805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.446 [2024-11-06 11:10:48.676380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 11:10:48.676400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.446 [2024-11-06 11:10:48.676408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.446 [2024-11-06 11:10:48.676628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.446 [2024-11-06 11:10:48.676856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.446 [2024-11-06 11:10:48.676865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.446 [2024-11-06 11:10:48.676873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.446 [2024-11-06 11:10:48.676880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.446 [2024-11-06 11:10:48.689644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.446 [2024-11-06 11:10:48.690223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 11:10:48.690241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.446 [2024-11-06 11:10:48.690249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.446 [2024-11-06 11:10:48.690467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.446 [2024-11-06 11:10:48.690687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.446 [2024-11-06 11:10:48.690698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.446 [2024-11-06 11:10:48.690705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.446 [2024-11-06 11:10:48.690712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.446 [2024-11-06 11:10:48.703469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.446 [2024-11-06 11:10:48.704100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 11:10:48.704140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.446 [2024-11-06 11:10:48.704152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.446 [2024-11-06 11:10:48.704391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.446 [2024-11-06 11:10:48.704615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.446 [2024-11-06 11:10:48.704626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.447 [2024-11-06 11:10:48.704634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.447 [2024-11-06 11:10:48.704642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.447 [2024-11-06 11:10:48.717415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.447 [2024-11-06 11:10:48.718063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-11-06 11:10:48.718103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.447 [2024-11-06 11:10:48.718114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.447 [2024-11-06 11:10:48.718353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.447 [2024-11-06 11:10:48.718578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.447 [2024-11-06 11:10:48.718588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.447 [2024-11-06 11:10:48.718596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.447 [2024-11-06 11:10:48.718605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.447 [2024-11-06 11:10:48.731370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.447 [2024-11-06 11:10:48.732020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-11-06 11:10:48.732060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.447 [2024-11-06 11:10:48.732075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.447 [2024-11-06 11:10:48.732314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.447 [2024-11-06 11:10:48.732539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.447 [2024-11-06 11:10:48.732549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.447 [2024-11-06 11:10:48.732557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.447 [2024-11-06 11:10:48.732565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.447 [2024-11-06 11:10:48.745340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.447 [2024-11-06 11:10:48.746019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-11-06 11:10:48.746059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.447 [2024-11-06 11:10:48.746070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.447 [2024-11-06 11:10:48.746308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.447 [2024-11-06 11:10:48.746533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.447 [2024-11-06 11:10:48.746544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.447 [2024-11-06 11:10:48.746552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.447 [2024-11-06 11:10:48.746560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.447 [2024-11-06 11:10:48.759134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.447 [2024-11-06 11:10:48.759706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-11-06 11:10:48.759726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.447 [2024-11-06 11:10:48.759734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.447 [2024-11-06 11:10:48.759961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.447 [2024-11-06 11:10:48.760181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.447 [2024-11-06 11:10:48.760191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.447 [2024-11-06 11:10:48.760199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.447 [2024-11-06 11:10:48.760207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.447 [2024-11-06 11:10:48.772956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.447 [2024-11-06 11:10:48.773521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-11-06 11:10:48.773539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.447 [2024-11-06 11:10:48.773547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.447 [2024-11-06 11:10:48.773772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.447 [2024-11-06 11:10:48.773997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.447 [2024-11-06 11:10:48.774007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.447 [2024-11-06 11:10:48.774014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.447 [2024-11-06 11:10:48.774021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.447 [2024-11-06 11:10:48.786779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.447 [2024-11-06 11:10:48.787352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-11-06 11:10:48.787369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.447 [2024-11-06 11:10:48.787377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.447 [2024-11-06 11:10:48.787596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.447 [2024-11-06 11:10:48.787822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.447 [2024-11-06 11:10:48.787832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.447 [2024-11-06 11:10:48.787840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.447 [2024-11-06 11:10:48.787846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.447 [2024-11-06 11:10:48.800596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.447 [2024-11-06 11:10:48.801122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-11-06 11:10:48.801139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.447 [2024-11-06 11:10:48.801147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.447 [2024-11-06 11:10:48.801366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.447 [2024-11-06 11:10:48.801585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.447 [2024-11-06 11:10:48.801594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.447 [2024-11-06 11:10:48.801601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.447 [2024-11-06 11:10:48.801608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.447 [2024-11-06 11:10:48.814563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.447 [2024-11-06 11:10:48.815233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-11-06 11:10:48.815273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.447 [2024-11-06 11:10:48.815284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.447 [2024-11-06 11:10:48.815522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.447 [2024-11-06 11:10:48.815757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.447 [2024-11-06 11:10:48.815768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.447 [2024-11-06 11:10:48.815776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.447 [2024-11-06 11:10:48.815788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.447 [2024-11-06 11:10:48.828405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.447 [2024-11-06 11:10:48.829049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-11-06 11:10:48.829088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.447 [2024-11-06 11:10:48.829099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.447 [2024-11-06 11:10:48.829338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.447 [2024-11-06 11:10:48.829562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.447 [2024-11-06 11:10:48.829573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.447 [2024-11-06 11:10:48.829581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.447 [2024-11-06 11:10:48.829589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.447 [2024-11-06 11:10:48.842369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.447 [2024-11-06 11:10:48.843046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-11-06 11:10:48.843085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.447 [2024-11-06 11:10:48.843097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.447 [2024-11-06 11:10:48.843336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.447 [2024-11-06 11:10:48.843559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.447 [2024-11-06 11:10:48.843569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.448 [2024-11-06 11:10:48.843577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.448 [2024-11-06 11:10:48.843586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.448 [2024-11-06 11:10:48.856367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.448 [2024-11-06 11:10:48.857048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.448 [2024-11-06 11:10:48.857087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.448 [2024-11-06 11:10:48.857100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.448 [2024-11-06 11:10:48.857340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.448 [2024-11-06 11:10:48.857564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.448 [2024-11-06 11:10:48.857574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.448 [2024-11-06 11:10:48.857582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.448 [2024-11-06 11:10:48.857590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.709 [2024-11-06 11:10:48.870361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.709 [2024-11-06 11:10:48.870919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.709 [2024-11-06 11:10:48.870940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.709 [2024-11-06 11:10:48.870948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.709 [2024-11-06 11:10:48.871167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.709 [2024-11-06 11:10:48.871388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.709 [2024-11-06 11:10:48.871397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.709 [2024-11-06 11:10:48.871404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.710 [2024-11-06 11:10:48.871411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.710 [2024-11-06 11:10:48.884169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.710 [2024-11-06 11:10:48.884578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.710 [2024-11-06 11:10:48.884598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.710 [2024-11-06 11:10:48.884605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.710 [2024-11-06 11:10:48.884831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.710 [2024-11-06 11:10:48.885051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.710 [2024-11-06 11:10:48.885062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.710 [2024-11-06 11:10:48.885069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.710 [2024-11-06 11:10:48.885076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.710 [2024-11-06 11:10:48.898042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.710 [2024-11-06 11:10:48.898608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.710 [2024-11-06 11:10:48.898626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.710 [2024-11-06 11:10:48.898634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.710 [2024-11-06 11:10:48.898858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.710 [2024-11-06 11:10:48.899079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.710 [2024-11-06 11:10:48.899088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.710 [2024-11-06 11:10:48.899096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.710 [2024-11-06 11:10:48.899103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.710 [2024-11-06 11:10:48.911854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.710 [2024-11-06 11:10:48.912382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.710 [2024-11-06 11:10:48.912422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.710 [2024-11-06 11:10:48.912439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.710 [2024-11-06 11:10:48.912680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.710 [2024-11-06 11:10:48.912914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.710 [2024-11-06 11:10:48.912925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.710 [2024-11-06 11:10:48.912934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.710 [2024-11-06 11:10:48.912943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.710 [2024-11-06 11:10:48.925702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.710 [2024-11-06 11:10:48.926380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.710 [2024-11-06 11:10:48.926419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.710 [2024-11-06 11:10:48.926430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.710 [2024-11-06 11:10:48.926670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.710 [2024-11-06 11:10:48.926902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.710 [2024-11-06 11:10:48.926913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.710 [2024-11-06 11:10:48.926921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.710 [2024-11-06 11:10:48.926929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.710 [2024-11-06 11:10:48.939703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.710 [2024-11-06 11:10:48.940408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.710 [2024-11-06 11:10:48.940448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.710 [2024-11-06 11:10:48.940459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.710 [2024-11-06 11:10:48.940698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.710 [2024-11-06 11:10:48.940931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.710 [2024-11-06 11:10:48.940943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.710 [2024-11-06 11:10:48.940951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.710 [2024-11-06 11:10:48.940959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.710 [2024-11-06 11:10:48.953510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.710 [2024-11-06 11:10:48.954155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.710 [2024-11-06 11:10:48.954195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.710 [2024-11-06 11:10:48.954206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.710 [2024-11-06 11:10:48.954444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.710 [2024-11-06 11:10:48.954683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.710 [2024-11-06 11:10:48.954696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.710 [2024-11-06 11:10:48.954704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.710 [2024-11-06 11:10:48.954712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.710 [2024-11-06 11:10:48.967483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.710 [2024-11-06 11:10:48.968117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.710 [2024-11-06 11:10:48.968157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.710 [2024-11-06 11:10:48.968168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.710 [2024-11-06 11:10:48.968406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.710 [2024-11-06 11:10:48.968631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.710 [2024-11-06 11:10:48.968640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.710 [2024-11-06 11:10:48.968648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.710 [2024-11-06 11:10:48.968657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.710 [2024-11-06 11:10:48.981431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.710 [2024-11-06 11:10:48.982072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.710 [2024-11-06 11:10:48.982111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.710 [2024-11-06 11:10:48.982122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.710 [2024-11-06 11:10:48.982361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.710 [2024-11-06 11:10:48.982585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.710 [2024-11-06 11:10:48.982595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.710 [2024-11-06 11:10:48.982603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.710 [2024-11-06 11:10:48.982611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.710 [2024-11-06 11:10:48.995386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.710 [2024-11-06 11:10:48.995963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.710 [2024-11-06 11:10:48.995983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.710 [2024-11-06 11:10:48.995991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.710 [2024-11-06 11:10:48.996211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.710 [2024-11-06 11:10:48.996432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.710 [2024-11-06 11:10:48.996441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.710 [2024-11-06 11:10:48.996448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.710 [2024-11-06 11:10:48.996460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.710 [2024-11-06 11:10:49.009226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.710 [2024-11-06 11:10:49.009839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.710 [2024-11-06 11:10:49.009879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.710 [2024-11-06 11:10:49.009891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.710 [2024-11-06 11:10:49.010132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.710 [2024-11-06 11:10:49.010356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.710 [2024-11-06 11:10:49.010366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.710 [2024-11-06 11:10:49.010374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.711 [2024-11-06 11:10:49.010383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.711 [2024-11-06 11:10:49.023155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.711 [2024-11-06 11:10:49.023794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.711 [2024-11-06 11:10:49.023834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.711 [2024-11-06 11:10:49.023846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.711 [2024-11-06 11:10:49.024086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.711 [2024-11-06 11:10:49.024310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.711 [2024-11-06 11:10:49.024321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.711 [2024-11-06 11:10:49.024329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.711 [2024-11-06 11:10:49.024337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.711 [2024-11-06 11:10:49.037108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.711 [2024-11-06 11:10:49.037803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.711 [2024-11-06 11:10:49.037843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.711 [2024-11-06 11:10:49.037856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.711 [2024-11-06 11:10:49.038096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.711 [2024-11-06 11:10:49.038321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.711 [2024-11-06 11:10:49.038331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.711 [2024-11-06 11:10:49.038339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.711 [2024-11-06 11:10:49.038347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.711 [2024-11-06 11:10:49.050903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.711 [2024-11-06 11:10:49.051580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.711 [2024-11-06 11:10:49.051620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.711 [2024-11-06 11:10:49.051631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.711 [2024-11-06 11:10:49.051882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.711 [2024-11-06 11:10:49.052106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.711 [2024-11-06 11:10:49.052117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.711 [2024-11-06 11:10:49.052125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.711 [2024-11-06 11:10:49.052133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.711 [2024-11-06 11:10:49.064902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.711 [2024-11-06 11:10:49.065569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.711 [2024-11-06 11:10:49.065609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.711 [2024-11-06 11:10:49.065620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.711 [2024-11-06 11:10:49.065869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.711 [2024-11-06 11:10:49.066094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.711 [2024-11-06 11:10:49.066105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.711 [2024-11-06 11:10:49.066113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.711 [2024-11-06 11:10:49.066121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.711 [2024-11-06 11:10:49.078909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.711 [2024-11-06 11:10:49.079590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.711 [2024-11-06 11:10:49.079630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.711 [2024-11-06 11:10:49.079641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.711 [2024-11-06 11:10:49.079891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.711 [2024-11-06 11:10:49.080116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.711 [2024-11-06 11:10:49.080126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.711 [2024-11-06 11:10:49.080134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.711 [2024-11-06 11:10:49.080142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.711 [2024-11-06 11:10:49.092901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.711 [2024-11-06 11:10:49.093574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.711 [2024-11-06 11:10:49.093613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.711 [2024-11-06 11:10:49.093628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.711 [2024-11-06 11:10:49.093878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.711 [2024-11-06 11:10:49.094103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.711 [2024-11-06 11:10:49.094113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.711 [2024-11-06 11:10:49.094121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.711 [2024-11-06 11:10:49.094130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.711 [2024-11-06 11:10:49.106893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.711 [2024-11-06 11:10:49.107428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.711 [2024-11-06 11:10:49.107448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.711 [2024-11-06 11:10:49.107456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.711 [2024-11-06 11:10:49.107676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.711 [2024-11-06 11:10:49.107903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.711 [2024-11-06 11:10:49.107914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.711 [2024-11-06 11:10:49.107922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.711 [2024-11-06 11:10:49.107929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.711 [2024-11-06 11:10:49.120882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.711 [2024-11-06 11:10:49.121280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.711 [2024-11-06 11:10:49.121298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.711 [2024-11-06 11:10:49.121305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.711 [2024-11-06 11:10:49.121524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.711 [2024-11-06 11:10:49.121744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.711 [2024-11-06 11:10:49.121762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.711 [2024-11-06 11:10:49.121770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.711 [2024-11-06 11:10:49.121778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.973 [2024-11-06 11:10:49.134744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.973 [2024-11-06 11:10:49.135264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.973 [2024-11-06 11:10:49.135281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.973 [2024-11-06 11:10:49.135289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.973 [2024-11-06 11:10:49.135508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.973 [2024-11-06 11:10:49.135732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.973 [2024-11-06 11:10:49.135742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.973 [2024-11-06 11:10:49.135764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.973 [2024-11-06 11:10:49.135772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.973 [2024-11-06 11:10:49.148723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.973 [2024-11-06 11:10:49.149287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.973 [2024-11-06 11:10:49.149305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.973 [2024-11-06 11:10:49.149312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.973 [2024-11-06 11:10:49.149531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.973 [2024-11-06 11:10:49.149757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.973 [2024-11-06 11:10:49.149767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.973 [2024-11-06 11:10:49.149774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.974 [2024-11-06 11:10:49.149780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.974 [2024-11-06 11:10:49.162529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.974 [2024-11-06 11:10:49.163149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.974 [2024-11-06 11:10:49.163189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.974 [2024-11-06 11:10:49.163200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.974 [2024-11-06 11:10:49.163438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.974 [2024-11-06 11:10:49.163663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.974 [2024-11-06 11:10:49.163673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.974 [2024-11-06 11:10:49.163682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.974 [2024-11-06 11:10:49.163690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.974 [2024-11-06 11:10:49.176454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.974 [2024-11-06 11:10:49.177106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.974 [2024-11-06 11:10:49.177145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.974 [2024-11-06 11:10:49.177156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.974 [2024-11-06 11:10:49.177395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.974 [2024-11-06 11:10:49.177620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.974 [2024-11-06 11:10:49.177630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.974 [2024-11-06 11:10:49.177638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.974 [2024-11-06 11:10:49.177651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.974 [2024-11-06 11:10:49.190419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.974 [2024-11-06 11:10:49.191101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.974 [2024-11-06 11:10:49.191140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.974 [2024-11-06 11:10:49.191151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.974 [2024-11-06 11:10:49.191390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.974 [2024-11-06 11:10:49.191615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.974 [2024-11-06 11:10:49.191625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.974 [2024-11-06 11:10:49.191633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.974 [2024-11-06 11:10:49.191641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.974 [2024-11-06 11:10:49.204424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.974 [2024-11-06 11:10:49.205118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.974 [2024-11-06 11:10:49.205159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.974 [2024-11-06 11:10:49.205172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.974 [2024-11-06 11:10:49.205411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.974 [2024-11-06 11:10:49.205637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.974 [2024-11-06 11:10:49.205648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.974 [2024-11-06 11:10:49.205657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.974 [2024-11-06 11:10:49.205666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.974 [2024-11-06 11:10:49.218240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.974 [2024-11-06 11:10:49.218859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.974 [2024-11-06 11:10:49.218898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.974 [2024-11-06 11:10:49.218911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.974 [2024-11-06 11:10:49.219153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.974 [2024-11-06 11:10:49.219376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.974 [2024-11-06 11:10:49.219386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.974 [2024-11-06 11:10:49.219395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.974 [2024-11-06 11:10:49.219404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.974 [2024-11-06 11:10:49.232175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.974 [2024-11-06 11:10:49.232762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.974 [2024-11-06 11:10:49.232783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.974 [2024-11-06 11:10:49.232791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.974 [2024-11-06 11:10:49.233010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.974 [2024-11-06 11:10:49.233230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.974 [2024-11-06 11:10:49.233240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.974 [2024-11-06 11:10:49.233247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.974 [2024-11-06 11:10:49.233254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.974 [2024-11-06 11:10:49.246016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.974 [2024-11-06 11:10:49.246679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.974 [2024-11-06 11:10:49.246719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.974 [2024-11-06 11:10:49.246731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.974 [2024-11-06 11:10:49.246981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.974 [2024-11-06 11:10:49.247206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.974 [2024-11-06 11:10:49.247216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.974 [2024-11-06 11:10:49.247224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.974 [2024-11-06 11:10:49.247232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.974 [2024-11-06 11:10:49.260083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.974 [2024-11-06 11:10:49.260672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.974 [2024-11-06 11:10:49.260693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.974 [2024-11-06 11:10:49.260701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.974 [2024-11-06 11:10:49.260928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.974 [2024-11-06 11:10:49.261149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.975 [2024-11-06 11:10:49.261158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.975 [2024-11-06 11:10:49.261165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.975 [2024-11-06 11:10:49.261172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.975 [2024-11-06 11:10:49.273929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.975 [2024-11-06 11:10:49.274587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.975 [2024-11-06 11:10:49.274627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.975 [2024-11-06 11:10:49.274642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.975 [2024-11-06 11:10:49.274891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.975 [2024-11-06 11:10:49.275117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.975 [2024-11-06 11:10:49.275127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.975 [2024-11-06 11:10:49.275135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.975 [2024-11-06 11:10:49.275143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.975 [2024-11-06 11:10:49.287922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.975 [2024-11-06 11:10:49.288593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.975 [2024-11-06 11:10:49.288632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.975 [2024-11-06 11:10:49.288643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.975 [2024-11-06 11:10:49.288892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.975 [2024-11-06 11:10:49.289117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.975 [2024-11-06 11:10:49.289127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.975 [2024-11-06 11:10:49.289135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.975 [2024-11-06 11:10:49.289143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.975 [2024-11-06 11:10:49.301927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.975 [2024-11-06 11:10:49.302506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.975 [2024-11-06 11:10:49.302526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.975 [2024-11-06 11:10:49.302534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.975 [2024-11-06 11:10:49.302762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.975 [2024-11-06 11:10:49.302983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.975 [2024-11-06 11:10:49.302993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.975 [2024-11-06 11:10:49.303001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.975 [2024-11-06 11:10:49.303008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.975 [2024-11-06 11:10:49.315793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.975 [2024-11-06 11:10:49.316364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.975 [2024-11-06 11:10:49.316383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.975 [2024-11-06 11:10:49.316390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.975 [2024-11-06 11:10:49.316610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.975 [2024-11-06 11:10:49.316841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.975 [2024-11-06 11:10:49.316851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.975 [2024-11-06 11:10:49.316858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.975 [2024-11-06 11:10:49.316865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.975 [2024-11-06 11:10:49.329629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.975 [2024-11-06 11:10:49.330201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.975 [2024-11-06 11:10:49.330218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.975 [2024-11-06 11:10:49.330226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.975 [2024-11-06 11:10:49.330445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.975 [2024-11-06 11:10:49.330666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.975 [2024-11-06 11:10:49.330676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.975 [2024-11-06 11:10:49.330684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.975 [2024-11-06 11:10:49.330691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.975 [2024-11-06 11:10:49.343474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.975 [2024-11-06 11:10:49.344040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.975 [2024-11-06 11:10:49.344058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.975 [2024-11-06 11:10:49.344065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.975 [2024-11-06 11:10:49.344284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.975 [2024-11-06 11:10:49.344504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.975 [2024-11-06 11:10:49.344515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.975 [2024-11-06 11:10:49.344522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.975 [2024-11-06 11:10:49.344529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.975 [2024-11-06 11:10:49.357306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.975 [2024-11-06 11:10:49.357968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.975 [2024-11-06 11:10:49.358006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.975 [2024-11-06 11:10:49.358017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.975 [2024-11-06 11:10:49.358257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.975 [2024-11-06 11:10:49.358480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.975 [2024-11-06 11:10:49.358490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.975 [2024-11-06 11:10:49.358498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.975 [2024-11-06 11:10:49.358511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.975 [2024-11-06 11:10:49.371293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.975 [2024-11-06 11:10:49.371975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.975 [2024-11-06 11:10:49.372014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.975 [2024-11-06 11:10:49.372026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.976 [2024-11-06 11:10:49.372265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.976 [2024-11-06 11:10:49.372489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.976 [2024-11-06 11:10:49.372499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.976 [2024-11-06 11:10:49.372507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.976 [2024-11-06 11:10:49.372515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:57.976 [2024-11-06 11:10:49.385099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:57.976 [2024-11-06 11:10:49.385669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.976 [2024-11-06 11:10:49.385689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:57.976 [2024-11-06 11:10:49.385697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:57.976 [2024-11-06 11:10:49.385924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:57.976 [2024-11-06 11:10:49.386145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:57.976 [2024-11-06 11:10:49.386155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:57.976 [2024-11-06 11:10:49.386162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:57.976 [2024-11-06 11:10:49.386169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.237 [2024-11-06 11:10:49.399008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.237 [2024-11-06 11:10:49.399627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-11-06 11:10:49.399666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.237 [2024-11-06 11:10:49.399677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.237 [2024-11-06 11:10:49.399938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.237 [2024-11-06 11:10:49.400163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.237 [2024-11-06 11:10:49.400173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.237 [2024-11-06 11:10:49.400181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.237 [2024-11-06 11:10:49.400190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.237 [2024-11-06 11:10:49.412976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.237 [2024-11-06 11:10:49.413567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.237 [2024-11-06 11:10:49.413587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.237 [2024-11-06 11:10:49.413595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.237 [2024-11-06 11:10:49.413822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.237 [2024-11-06 11:10:49.414044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.238 [2024-11-06 11:10:49.414054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.238 [2024-11-06 11:10:49.414061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.238 [2024-11-06 11:10:49.414069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.238 [2024-11-06 11:10:49.426843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.238 [2024-11-06 11:10:49.427415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-11-06 11:10:49.427433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.238 [2024-11-06 11:10:49.427440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.238 [2024-11-06 11:10:49.427659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.238 [2024-11-06 11:10:49.427886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.238 [2024-11-06 11:10:49.427897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.238 [2024-11-06 11:10:49.427904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.238 [2024-11-06 11:10:49.427911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.238 [2024-11-06 11:10:49.440689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.238 [2024-11-06 11:10:49.441225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-11-06 11:10:49.441243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.238 [2024-11-06 11:10:49.441251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.238 [2024-11-06 11:10:49.441469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.238 [2024-11-06 11:10:49.441689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.238 [2024-11-06 11:10:49.441698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.238 [2024-11-06 11:10:49.441705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.238 [2024-11-06 11:10:49.441711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.238 [2024-11-06 11:10:49.454489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.238 [2024-11-06 11:10:49.455021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-11-06 11:10:49.455039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.238 [2024-11-06 11:10:49.455051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.238 [2024-11-06 11:10:49.455270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.238 [2024-11-06 11:10:49.455490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.238 [2024-11-06 11:10:49.455498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.238 [2024-11-06 11:10:49.455506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.238 [2024-11-06 11:10:49.455514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.238 [2024-11-06 11:10:49.468303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.238 [2024-11-06 11:10:49.468759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-11-06 11:10:49.468778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.238 [2024-11-06 11:10:49.468786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.238 [2024-11-06 11:10:49.469006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.238 [2024-11-06 11:10:49.469226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.238 [2024-11-06 11:10:49.469236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.238 [2024-11-06 11:10:49.469243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.238 [2024-11-06 11:10:49.469250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.238 [2024-11-06 11:10:49.482229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.238 [2024-11-06 11:10:49.482755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-11-06 11:10:49.482773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.238 [2024-11-06 11:10:49.482780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.238 [2024-11-06 11:10:49.482999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.238 [2024-11-06 11:10:49.483219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.238 [2024-11-06 11:10:49.483228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.238 [2024-11-06 11:10:49.483235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.238 [2024-11-06 11:10:49.483242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.238 [2024-11-06 11:10:49.496221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.238 [2024-11-06 11:10:49.496850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-11-06 11:10:49.496889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.238 [2024-11-06 11:10:49.496902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.238 [2024-11-06 11:10:49.497143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.238 [2024-11-06 11:10:49.497372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.238 [2024-11-06 11:10:49.497383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.238 [2024-11-06 11:10:49.497392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.238 [2024-11-06 11:10:49.497401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.238 [2024-11-06 11:10:49.510176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.238 [2024-11-06 11:10:49.510836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-11-06 11:10:49.510876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.238 [2024-11-06 11:10:49.510887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.238 [2024-11-06 11:10:49.511126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.238 [2024-11-06 11:10:49.511350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.238 [2024-11-06 11:10:49.511360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.238 [2024-11-06 11:10:49.511368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.238 [2024-11-06 11:10:49.511376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.238 [2024-11-06 11:10:49.524198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.238 [2024-11-06 11:10:49.524864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-11-06 11:10:49.524903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.238 [2024-11-06 11:10:49.524916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.238 [2024-11-06 11:10:49.525157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.238 [2024-11-06 11:10:49.525383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.238 [2024-11-06 11:10:49.525393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.238 [2024-11-06 11:10:49.525401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.238 [2024-11-06 11:10:49.525409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.238 [2024-11-06 11:10:49.538203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.238 [2024-11-06 11:10:49.538798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-11-06 11:10:49.538826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.238 [2024-11-06 11:10:49.538835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.238 [2024-11-06 11:10:49.539060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.238 [2024-11-06 11:10:49.539281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.238 [2024-11-06 11:10:49.539292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.238 [2024-11-06 11:10:49.539299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.238 [2024-11-06 11:10:49.539310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.238 5609.40 IOPS, 21.91 MiB/s [2024-11-06T10:10:49.660Z] [2024-11-06 11:10:49.552133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.238 [2024-11-06 11:10:49.552820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.238 [2024-11-06 11:10:49.552860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.238 [2024-11-06 11:10:49.552873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.238 [2024-11-06 11:10:49.553115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.239 [2024-11-06 11:10:49.553339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.239 [2024-11-06 11:10:49.553350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.239 [2024-11-06 11:10:49.553359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.239 [2024-11-06 11:10:49.553367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.239 [2024-11-06 11:10:49.565940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.239 [2024-11-06 11:10:49.566519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.239 [2024-11-06 11:10:49.566539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.239 [2024-11-06 11:10:49.566548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.239 [2024-11-06 11:10:49.566774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.239 [2024-11-06 11:10:49.566995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.239 [2024-11-06 11:10:49.567005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.239 [2024-11-06 11:10:49.567012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.239 [2024-11-06 11:10:49.567019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.239 [2024-11-06 11:10:49.579783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.239 [2024-11-06 11:10:49.580426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.239 [2024-11-06 11:10:49.580466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.239 [2024-11-06 11:10:49.580476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.239 [2024-11-06 11:10:49.580715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.239 [2024-11-06 11:10:49.580948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.239 [2024-11-06 11:10:49.580959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.239 [2024-11-06 11:10:49.580967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.239 [2024-11-06 11:10:49.580975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.239 [2024-11-06 11:10:49.593737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.239 [2024-11-06 11:10:49.594328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.239 [2024-11-06 11:10:49.594347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.239 [2024-11-06 11:10:49.594356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.239 [2024-11-06 11:10:49.594575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.239 [2024-11-06 11:10:49.594802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.239 [2024-11-06 11:10:49.594813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.239 [2024-11-06 11:10:49.594821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.239 [2024-11-06 11:10:49.594828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.239 [2024-11-06 11:10:49.607589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.239 [2024-11-06 11:10:49.608163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.239 [2024-11-06 11:10:49.608181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.239 [2024-11-06 11:10:49.608189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.239 [2024-11-06 11:10:49.608407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.239 [2024-11-06 11:10:49.608627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.239 [2024-11-06 11:10:49.608637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.239 [2024-11-06 11:10:49.608644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.239 [2024-11-06 11:10:49.608651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.239 [2024-11-06 11:10:49.621414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.239 [2024-11-06 11:10:49.622062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.239 [2024-11-06 11:10:49.622102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.239 [2024-11-06 11:10:49.622113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.239 [2024-11-06 11:10:49.622352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.239 [2024-11-06 11:10:49.622576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.239 [2024-11-06 11:10:49.622587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.239 [2024-11-06 11:10:49.622596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.239 [2024-11-06 11:10:49.622604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.239 [2024-11-06 11:10:49.635379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.239 [2024-11-06 11:10:49.636095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.239 [2024-11-06 11:10:49.636134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.239 [2024-11-06 11:10:49.636150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.239 [2024-11-06 11:10:49.636388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.239 [2024-11-06 11:10:49.636612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.239 [2024-11-06 11:10:49.636623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.239 [2024-11-06 11:10:49.636631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.239 [2024-11-06 11:10:49.636639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.239 [2024-11-06 11:10:49.649208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.239 [2024-11-06 11:10:49.649798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.239 [2024-11-06 11:10:49.649826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.239 [2024-11-06 11:10:49.649835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.239 [2024-11-06 11:10:49.650060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.239 [2024-11-06 11:10:49.650281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.239 [2024-11-06 11:10:49.650291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.239 [2024-11-06 11:10:49.650298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.239 [2024-11-06 11:10:49.650305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.501 [2024-11-06 11:10:49.663090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.501 [2024-11-06 11:10:49.663718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.501 [2024-11-06 11:10:49.663764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.501 [2024-11-06 11:10:49.663778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.501 [2024-11-06 11:10:49.664018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.501 [2024-11-06 11:10:49.664243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.501 [2024-11-06 11:10:49.664253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.501 [2024-11-06 11:10:49.664261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.501 [2024-11-06 11:10:49.664269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.501 [2024-11-06 11:10:49.677029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.501 [2024-11-06 11:10:49.677562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.501 [2024-11-06 11:10:49.677583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.501 [2024-11-06 11:10:49.677591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.501 [2024-11-06 11:10:49.677816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.501 [2024-11-06 11:10:49.678041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.501 [2024-11-06 11:10:49.678052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.501 [2024-11-06 11:10:49.678059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.501 [2024-11-06 11:10:49.678066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.501 [2024-11-06 11:10:49.690833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.501 [2024-11-06 11:10:49.691393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.501 [2024-11-06 11:10:49.691411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.501 [2024-11-06 11:10:49.691419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.501 [2024-11-06 11:10:49.691638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.501 [2024-11-06 11:10:49.691864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.501 [2024-11-06 11:10:49.691875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.501 [2024-11-06 11:10:49.691882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.501 [2024-11-06 11:10:49.691889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.501 [2024-11-06 11:10:49.704634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.501 [2024-11-06 11:10:49.705266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.501 [2024-11-06 11:10:49.705305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.501 [2024-11-06 11:10:49.705317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.501 [2024-11-06 11:10:49.705555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.501 [2024-11-06 11:10:49.705787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.501 [2024-11-06 11:10:49.705800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.501 [2024-11-06 11:10:49.705810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.501 [2024-11-06 11:10:49.705820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.501 [2024-11-06 11:10:49.718588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.501 [2024-11-06 11:10:49.719249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.501 [2024-11-06 11:10:49.719288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.501 [2024-11-06 11:10:49.719299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.501 [2024-11-06 11:10:49.719538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.501 [2024-11-06 11:10:49.719771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.501 [2024-11-06 11:10:49.719782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.501 [2024-11-06 11:10:49.719799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.501 [2024-11-06 11:10:49.719808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.501 [2024-11-06 11:10:49.732579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.501 [2024-11-06 11:10:49.733138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.501 [2024-11-06 11:10:49.733159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.501 [2024-11-06 11:10:49.733167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.502 [2024-11-06 11:10:49.733387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.502 [2024-11-06 11:10:49.733607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.502 [2024-11-06 11:10:49.733617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.502 [2024-11-06 11:10:49.733624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.502 [2024-11-06 11:10:49.733631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.502 [2024-11-06 11:10:49.746419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.502 [2024-11-06 11:10:49.747057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.502 [2024-11-06 11:10:49.747096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.502 [2024-11-06 11:10:49.747108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.502 [2024-11-06 11:10:49.747346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.502 [2024-11-06 11:10:49.747571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.502 [2024-11-06 11:10:49.747581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.502 [2024-11-06 11:10:49.747590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.502 [2024-11-06 11:10:49.747598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.502 [2024-11-06 11:10:49.760377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.502 [2024-11-06 11:10:49.761043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.502 [2024-11-06 11:10:49.761082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.502 [2024-11-06 11:10:49.761093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.502 [2024-11-06 11:10:49.761331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.502 [2024-11-06 11:10:49.761556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.502 [2024-11-06 11:10:49.761566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.502 [2024-11-06 11:10:49.761574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.502 [2024-11-06 11:10:49.761583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.502 [2024-11-06 11:10:49.774366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.502 [2024-11-06 11:10:49.775040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.502 [2024-11-06 11:10:49.775080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.502 [2024-11-06 11:10:49.775091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.502 [2024-11-06 11:10:49.775330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.502 [2024-11-06 11:10:49.775554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.502 [2024-11-06 11:10:49.775565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.502 [2024-11-06 11:10:49.775573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.502 [2024-11-06 11:10:49.775581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.502 [2024-11-06 11:10:49.788370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.502 [2024-11-06 11:10:49.789041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.502 [2024-11-06 11:10:49.789081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.502 [2024-11-06 11:10:49.789092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.502 [2024-11-06 11:10:49.789331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.502 [2024-11-06 11:10:49.789555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.502 [2024-11-06 11:10:49.789565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.502 [2024-11-06 11:10:49.789573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.502 [2024-11-06 11:10:49.789581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.502 [2024-11-06 11:10:49.802353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.502 [2024-11-06 11:10:49.803024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.502 [2024-11-06 11:10:49.803064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.502 [2024-11-06 11:10:49.803075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.502 [2024-11-06 11:10:49.803313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.502 [2024-11-06 11:10:49.803538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.502 [2024-11-06 11:10:49.803549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.502 [2024-11-06 11:10:49.803556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.502 [2024-11-06 11:10:49.803565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.502 [2024-11-06 11:10:49.816348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.502 [2024-11-06 11:10:49.817039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.502 [2024-11-06 11:10:49.817079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.502 [2024-11-06 11:10:49.817095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.502 [2024-11-06 11:10:49.817333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.502 [2024-11-06 11:10:49.817557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.502 [2024-11-06 11:10:49.817568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.502 [2024-11-06 11:10:49.817576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.502 [2024-11-06 11:10:49.817584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.502 [2024-11-06 11:10:49.830159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.502 [2024-11-06 11:10:49.830717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.502 [2024-11-06 11:10:49.830737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.502 [2024-11-06 11:10:49.830751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.502 [2024-11-06 11:10:49.830971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.502 [2024-11-06 11:10:49.831192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.502 [2024-11-06 11:10:49.831201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.502 [2024-11-06 11:10:49.831209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.502 [2024-11-06 11:10:49.831215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.502 [2024-11-06 11:10:49.843997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.502 [2024-11-06 11:10:49.844555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.502 [2024-11-06 11:10:49.844573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.502 [2024-11-06 11:10:49.844581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.502 [2024-11-06 11:10:49.844807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.502 [2024-11-06 11:10:49.845027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.502 [2024-11-06 11:10:49.845038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.502 [2024-11-06 11:10:49.845045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.502 [2024-11-06 11:10:49.845052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.502 [2024-11-06 11:10:49.857845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.502 [2024-11-06 11:10:49.858417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.502 [2024-11-06 11:10:49.858434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.502 [2024-11-06 11:10:49.858442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.502 [2024-11-06 11:10:49.858661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.502 [2024-11-06 11:10:49.858894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.502 [2024-11-06 11:10:49.858906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.502 [2024-11-06 11:10:49.858913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.502 [2024-11-06 11:10:49.858920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.502 [2024-11-06 11:10:49.871695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.502 [2024-11-06 11:10:49.872270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.502 [2024-11-06 11:10:49.872288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.502 [2024-11-06 11:10:49.872295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.503 [2024-11-06 11:10:49.872514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.503 [2024-11-06 11:10:49.872734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.503 [2024-11-06 11:10:49.872744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.503 [2024-11-06 11:10:49.872757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.503 [2024-11-06 11:10:49.872764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.503 [2024-11-06 11:10:49.885528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.503 [2024-11-06 11:10:49.886095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.503 [2024-11-06 11:10:49.886112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.503 [2024-11-06 11:10:49.886120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.503 [2024-11-06 11:10:49.886338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.503 [2024-11-06 11:10:49.886558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.503 [2024-11-06 11:10:49.886568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.503 [2024-11-06 11:10:49.886575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.503 [2024-11-06 11:10:49.886582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.503 [2024-11-06 11:10:49.899352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.503 [2024-11-06 11:10:49.899885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.503 [2024-11-06 11:10:49.899902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.503 [2024-11-06 11:10:49.899909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.503 [2024-11-06 11:10:49.900128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.503 [2024-11-06 11:10:49.900347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.503 [2024-11-06 11:10:49.900357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.503 [2024-11-06 11:10:49.900368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.503 [2024-11-06 11:10:49.900375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.503 [2024-11-06 11:10:49.913156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.503 [2024-11-06 11:10:49.913681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.503 [2024-11-06 11:10:49.913698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.503 [2024-11-06 11:10:49.913706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.503 [2024-11-06 11:10:49.913930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.503 [2024-11-06 11:10:49.914150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.503 [2024-11-06 11:10:49.914161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.503 [2024-11-06 11:10:49.914168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.503 [2024-11-06 11:10:49.914175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.765 [2024-11-06 11:10:49.927155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:58.765 [2024-11-06 11:10:49.927718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 11:10:49.927734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:58.765 [2024-11-06 11:10:49.927742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:58.765 [2024-11-06 11:10:49.927967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:58.765 [2024-11-06 11:10:49.928187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:58.765 [2024-11-06 11:10:49.928197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:58.765 [2024-11-06 11:10:49.928205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:58.765 [2024-11-06 11:10:49.928211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:58.765 [2024-11-06 11:10:49.940999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.765 [2024-11-06 11:10:49.941562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.765 [2024-11-06 11:10:49.941579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.765 [2024-11-06 11:10:49.941587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.765 [2024-11-06 11:10:49.941812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.765 [2024-11-06 11:10:49.942033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.765 [2024-11-06 11:10:49.942042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.765 [2024-11-06 11:10:49.942049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.765 [2024-11-06 11:10:49.942056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:58.765 [2024-11-06 11:10:49.954828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.765 [2024-11-06 11:10:49.955397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.765 [2024-11-06 11:10:49.955413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.765 [2024-11-06 11:10:49.955421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.765 [2024-11-06 11:10:49.955639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.765 [2024-11-06 11:10:49.955866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.765 [2024-11-06 11:10:49.955876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.765 [2024-11-06 11:10:49.955884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.765 [2024-11-06 11:10:49.955891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:58.765 [2024-11-06 11:10:49.968669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.765 [2024-11-06 11:10:49.969336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.765 [2024-11-06 11:10:49.969376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.765 [2024-11-06 11:10:49.969388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.765 [2024-11-06 11:10:49.969626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.765 [2024-11-06 11:10:49.969860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.765 [2024-11-06 11:10:49.969872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.765 [2024-11-06 11:10:49.969880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.765 [2024-11-06 11:10:49.969888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:58.765 [2024-11-06 11:10:49.982669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.765 [2024-11-06 11:10:49.983349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.765 [2024-11-06 11:10:49.983389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.765 [2024-11-06 11:10:49.983400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.765 [2024-11-06 11:10:49.983639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.765 [2024-11-06 11:10:49.983874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.765 [2024-11-06 11:10:49.983885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.765 [2024-11-06 11:10:49.983893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.765 [2024-11-06 11:10:49.983901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:58.765 [2024-11-06 11:10:49.996476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.765 [2024-11-06 11:10:49.997027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.765 [2024-11-06 11:10:49.997048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.765 [2024-11-06 11:10:49.997061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.765 [2024-11-06 11:10:49.997282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.765 [2024-11-06 11:10:49.997502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.765 [2024-11-06 11:10:49.997511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.765 [2024-11-06 11:10:49.997518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.765 [2024-11-06 11:10:49.997525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:58.765 [2024-11-06 11:10:50.010455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.765 [2024-11-06 11:10:50.011137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.766 [2024-11-06 11:10:50.011178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.766 [2024-11-06 11:10:50.011189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.766 [2024-11-06 11:10:50.011428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.766 [2024-11-06 11:10:50.011652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.766 [2024-11-06 11:10:50.011662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.766 [2024-11-06 11:10:50.011671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.766 [2024-11-06 11:10:50.011679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:58.766 [2024-11-06 11:10:50.024462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.766 [2024-11-06 11:10:50.025113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.766 [2024-11-06 11:10:50.025153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.766 [2024-11-06 11:10:50.025164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.766 [2024-11-06 11:10:50.025403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.766 [2024-11-06 11:10:50.025628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.766 [2024-11-06 11:10:50.025638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.766 [2024-11-06 11:10:50.025646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.766 [2024-11-06 11:10:50.025655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:58.766 [2024-11-06 11:10:50.038447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.766 [2024-11-06 11:10:50.039035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.766 [2024-11-06 11:10:50.039055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.766 [2024-11-06 11:10:50.039064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.766 [2024-11-06 11:10:50.039284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.766 [2024-11-06 11:10:50.039510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.766 [2024-11-06 11:10:50.039519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.766 [2024-11-06 11:10:50.039526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.766 [2024-11-06 11:10:50.039533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:58.766 [2024-11-06 11:10:50.052306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.766 [2024-11-06 11:10:50.052761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.766 [2024-11-06 11:10:50.052780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.766 [2024-11-06 11:10:50.052788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.766 [2024-11-06 11:10:50.053007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.766 [2024-11-06 11:10:50.053228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.766 [2024-11-06 11:10:50.053237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.766 [2024-11-06 11:10:50.053244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.766 [2024-11-06 11:10:50.053251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:58.766 [2024-11-06 11:10:50.066253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.766 [2024-11-06 11:10:50.066779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.766 [2024-11-06 11:10:50.066797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.766 [2024-11-06 11:10:50.066805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.766 [2024-11-06 11:10:50.067024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.766 [2024-11-06 11:10:50.067244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.766 [2024-11-06 11:10:50.067254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.766 [2024-11-06 11:10:50.067262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.766 [2024-11-06 11:10:50.067270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:58.766 [2024-11-06 11:10:50.080051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.766 [2024-11-06 11:10:50.080611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.766 [2024-11-06 11:10:50.080628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.766 [2024-11-06 11:10:50.080636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.766 [2024-11-06 11:10:50.080861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.766 [2024-11-06 11:10:50.081081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.766 [2024-11-06 11:10:50.081090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.766 [2024-11-06 11:10:50.081102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.766 [2024-11-06 11:10:50.081110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:58.766 [2024-11-06 11:10:50.093880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.766 [2024-11-06 11:10:50.094563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.766 [2024-11-06 11:10:50.094602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.766 [2024-11-06 11:10:50.094614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.766 [2024-11-06 11:10:50.094862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.766 [2024-11-06 11:10:50.095087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.766 [2024-11-06 11:10:50.095097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.766 [2024-11-06 11:10:50.095105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.766 [2024-11-06 11:10:50.095113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:58.766 [2024-11-06 11:10:50.107800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.766 [2024-11-06 11:10:50.108434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.766 [2024-11-06 11:10:50.108473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.766 [2024-11-06 11:10:50.108484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.766 [2024-11-06 11:10:50.108724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.766 [2024-11-06 11:10:50.108956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.766 [2024-11-06 11:10:50.108968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.766 [2024-11-06 11:10:50.108976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.766 [2024-11-06 11:10:50.108984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:58.766 [2024-11-06 11:10:50.121743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.766 [2024-11-06 11:10:50.122393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.766 [2024-11-06 11:10:50.122433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.766 [2024-11-06 11:10:50.122444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.766 [2024-11-06 11:10:50.122683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.766 [2024-11-06 11:10:50.122917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.766 [2024-11-06 11:10:50.122929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.766 [2024-11-06 11:10:50.122937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.766 [2024-11-06 11:10:50.122945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:58.766 [2024-11-06 11:10:50.135722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.766 [2024-11-06 11:10:50.136429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.766 [2024-11-06 11:10:50.136469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.766 [2024-11-06 11:10:50.136480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.766 [2024-11-06 11:10:50.136719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.766 [2024-11-06 11:10:50.136962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.766 [2024-11-06 11:10:50.136975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.766 [2024-11-06 11:10:50.136983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.766 [2024-11-06 11:10:50.136991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:58.766 [2024-11-06 11:10:50.149557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.766 [2024-11-06 11:10:50.150202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 11:10:50.150240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.767 [2024-11-06 11:10:50.150252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.767 [2024-11-06 11:10:50.150491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.767 [2024-11-06 11:10:50.150715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.767 [2024-11-06 11:10:50.150725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.767 [2024-11-06 11:10:50.150734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.767 [2024-11-06 11:10:50.150742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:58.767 [2024-11-06 11:10:50.163523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.767 [2024-11-06 11:10:50.164194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 11:10:50.164233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.767 [2024-11-06 11:10:50.164245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.767 [2024-11-06 11:10:50.164484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.767 [2024-11-06 11:10:50.164708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.767 [2024-11-06 11:10:50.164718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.767 [2024-11-06 11:10:50.164726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.767 [2024-11-06 11:10:50.164735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:58.767 [2024-11-06 11:10:50.177501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:58.767 [2024-11-06 11:10:50.178216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 11:10:50.178256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:58.767 [2024-11-06 11:10:50.178272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:58.767 [2024-11-06 11:10:50.178512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:58.767 [2024-11-06 11:10:50.178735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:58.767 [2024-11-06 11:10:50.178756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:58.767 [2024-11-06 11:10:50.178765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:58.767 [2024-11-06 11:10:50.178773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.030 [2024-11-06 11:10:50.191327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3436622 Killed "${NVMF_APP[@]}" "$@"
00:28:59.030 [2024-11-06 11:10:50.191877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.030 [2024-11-06 11:10:50.191898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.030 [2024-11-06 11:10:50.191906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.030 [2024-11-06 11:10:50.192126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.030 [2024-11-06 11:10:50.192346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.030 [2024-11-06 11:10:50.192357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.030 [2024-11-06 11:10:50.192364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.030 [2024-11-06 11:10:50.192371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.030 11:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:28:59.030 11:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:59.030 11:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:59.030 11:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:59.030 11:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:59.030 11:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3438330
00:28:59.030 11:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3438330
00:28:59.030 11:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:59.030 11:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3438330 ']'
00:28:59.030 11:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:59.030 11:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:59.030 11:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:59.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:59.030 11:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:59.030 11:10:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:59.030 [2024-11-06 11:10:50.205130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.030 [2024-11-06 11:10:50.205787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.030 [2024-11-06 11:10:50.205833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.030 [2024-11-06 11:10:50.205846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.030 [2024-11-06 11:10:50.206089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.031 [2024-11-06 11:10:50.206316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.031 [2024-11-06 11:10:50.206326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.031 [2024-11-06 11:10:50.206335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.031 [2024-11-06 11:10:50.206356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.031 [2024-11-06 11:10:50.218932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.031 [2024-11-06 11:10:50.219633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.031 [2024-11-06 11:10:50.219672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.031 [2024-11-06 11:10:50.219683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.031 [2024-11-06 11:10:50.219929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.031 [2024-11-06 11:10:50.220155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.031 [2024-11-06 11:10:50.220165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.031 [2024-11-06 11:10:50.220173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.031 [2024-11-06 11:10:50.220182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.031 [2024-11-06 11:10:50.232806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.031 [2024-11-06 11:10:50.233474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.031 [2024-11-06 11:10:50.233514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.031 [2024-11-06 11:10:50.233525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.031 [2024-11-06 11:10:50.233773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.031 [2024-11-06 11:10:50.233998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.031 [2024-11-06 11:10:50.234009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.031 [2024-11-06 11:10:50.234018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.031 [2024-11-06 11:10:50.234026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.031 [2024-11-06 11:10:50.246813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.031 [2024-11-06 11:10:50.247490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.031 [2024-11-06 11:10:50.247529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.031 [2024-11-06 11:10:50.247540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.031 [2024-11-06 11:10:50.247791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.031 [2024-11-06 11:10:50.248017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.031 [2024-11-06 11:10:50.248027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.031 [2024-11-06 11:10:50.248035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.031 [2024-11-06 11:10:50.248043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:59.031 [2024-11-06 11:10:50.253364] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:28:59.031 [2024-11-06 11:10:50.253411] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.031 [2024-11-06 11:10:50.260828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.031 [2024-11-06 11:10:50.261378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.031 [2024-11-06 11:10:50.261398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.031 [2024-11-06 11:10:50.261406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.031 [2024-11-06 11:10:50.261627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.031 [2024-11-06 11:10:50.261853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.031 [2024-11-06 11:10:50.261862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.031 [2024-11-06 11:10:50.261870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.031 [2024-11-06 11:10:50.261877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.031 [2024-11-06 11:10:50.274631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.031 [2024-11-06 11:10:50.275282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.031 [2024-11-06 11:10:50.275322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.031 [2024-11-06 11:10:50.275333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.031 [2024-11-06 11:10:50.275571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.031 [2024-11-06 11:10:50.275804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.031 [2024-11-06 11:10:50.275815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.031 [2024-11-06 11:10:50.275823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.031 [2024-11-06 11:10:50.275832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.031 [2024-11-06 11:10:50.288680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.031 [2024-11-06 11:10:50.289379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.031 [2024-11-06 11:10:50.289419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.031 [2024-11-06 11:10:50.289432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.031 [2024-11-06 11:10:50.289679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.031 [2024-11-06 11:10:50.289911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.031 [2024-11-06 11:10:50.289923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.031 [2024-11-06 11:10:50.289931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.031 [2024-11-06 11:10:50.289940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.031 [2024-11-06 11:10:50.302498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.031 [2024-11-06 11:10:50.303081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.031 [2024-11-06 11:10:50.303102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.031 [2024-11-06 11:10:50.303110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.031 [2024-11-06 11:10:50.303330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.031 [2024-11-06 11:10:50.303550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.031 [2024-11-06 11:10:50.303559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.031 [2024-11-06 11:10:50.303566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.031 [2024-11-06 11:10:50.303573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.031 [2024-11-06 11:10:50.316338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.031 [2024-11-06 11:10:50.316888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.031 [2024-11-06 11:10:50.316906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.031 [2024-11-06 11:10:50.316914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.031 [2024-11-06 11:10:50.317134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.031 [2024-11-06 11:10:50.317354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.031 [2024-11-06 11:10:50.317363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.031 [2024-11-06 11:10:50.317371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.031 [2024-11-06 11:10:50.317378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.031 [2024-11-06 11:10:50.330138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.031 [2024-11-06 11:10:50.330737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.031 [2024-11-06 11:10:50.330785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.031 [2024-11-06 11:10:50.330798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.031 [2024-11-06 11:10:50.331038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.031 [2024-11-06 11:10:50.331263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.032 [2024-11-06 11:10:50.331278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.032 [2024-11-06 11:10:50.331287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.032 [2024-11-06 11:10:50.331295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.032 [2024-11-06 11:10:50.344072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.032 [2024-11-06 11:10:50.344753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.032 [2024-11-06 11:10:50.344793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.032 [2024-11-06 11:10:50.344804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.032 [2024-11-06 11:10:50.345043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.032 [2024-11-06 11:10:50.345267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.032 [2024-11-06 11:10:50.345278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.032 [2024-11-06 11:10:50.345286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.032 [2024-11-06 11:10:50.345295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.032 [2024-11-06 11:10:50.346339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:59.032 [2024-11-06 11:10:50.358081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.032 [2024-11-06 11:10:50.358800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.032 [2024-11-06 11:10:50.358842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.032 [2024-11-06 11:10:50.358855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.032 [2024-11-06 11:10:50.359096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.032 [2024-11-06 11:10:50.359320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.032 [2024-11-06 11:10:50.359331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.032 [2024-11-06 11:10:50.359339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.032 [2024-11-06 11:10:50.359347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.032 [2024-11-06 11:10:50.371935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.032 [2024-11-06 11:10:50.372575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.032 [2024-11-06 11:10:50.372614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.032 [2024-11-06 11:10:50.372625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.032 [2024-11-06 11:10:50.372873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.032 [2024-11-06 11:10:50.373098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.032 [2024-11-06 11:10:50.373108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.032 [2024-11-06 11:10:50.373116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.032 [2024-11-06 11:10:50.373132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:59.032 [2024-11-06 11:10:50.375389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.032 [2024-11-06 11:10:50.375412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.032 [2024-11-06 11:10:50.375419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.032 [2024-11-06 11:10:50.375425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:59.032 [2024-11-06 11:10:50.375429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.032 [2024-11-06 11:10:50.376592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.032 [2024-11-06 11:10:50.376751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.032 [2024-11-06 11:10:50.376764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:59.032 [2024-11-06 11:10:50.385913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.032 [2024-11-06 11:10:50.386623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.032 [2024-11-06 11:10:50.386663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.032 [2024-11-06 11:10:50.386675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.032 [2024-11-06 11:10:50.386925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.032 [2024-11-06 11:10:50.387151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.032 [2024-11-06 11:10:50.387161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.032 [2024-11-06 11:10:50.387169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.032 [2024-11-06 11:10:50.387178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.032 [2024-11-06 11:10:50.399737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.032 [2024-11-06 11:10:50.400450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.032 [2024-11-06 11:10:50.400490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.032 [2024-11-06 11:10:50.400502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.032 [2024-11-06 11:10:50.400742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.032 [2024-11-06 11:10:50.400975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.032 [2024-11-06 11:10:50.400986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.032 [2024-11-06 11:10:50.400994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.032 [2024-11-06 11:10:50.401002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.032 [2024-11-06 11:10:50.413555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.032 [2024-11-06 11:10:50.414245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.032 [2024-11-06 11:10:50.414285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.032 [2024-11-06 11:10:50.414296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.032 [2024-11-06 11:10:50.414542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.032 [2024-11-06 11:10:50.414776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.032 [2024-11-06 11:10:50.414787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.032 [2024-11-06 11:10:50.414796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.032 [2024-11-06 11:10:50.414804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.032 [2024-11-06 11:10:50.427364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.032 [2024-11-06 11:10:50.428070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.032 [2024-11-06 11:10:50.428110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.032 [2024-11-06 11:10:50.428122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.032 [2024-11-06 11:10:50.428361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.032 [2024-11-06 11:10:50.428585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.032 [2024-11-06 11:10:50.428596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.032 [2024-11-06 11:10:50.428604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.032 [2024-11-06 11:10:50.428612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.032 [2024-11-06 11:10:50.441184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.032 [2024-11-06 11:10:50.441793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.032 [2024-11-06 11:10:50.441821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.032 [2024-11-06 11:10:50.441830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.032 [2024-11-06 11:10:50.442056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.032 [2024-11-06 11:10:50.442277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.032 [2024-11-06 11:10:50.442286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.032 [2024-11-06 11:10:50.442294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.032 [2024-11-06 11:10:50.442301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.294 [2024-11-06 11:10:50.455071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.294 [2024-11-06 11:10:50.455668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.294 [2024-11-06 11:10:50.455686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.294 [2024-11-06 11:10:50.455694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.295 [2024-11-06 11:10:50.455919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.295 [2024-11-06 11:10:50.456140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.295 [2024-11-06 11:10:50.456155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.295 [2024-11-06 11:10:50.456163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.295 [2024-11-06 11:10:50.456170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.295 [2024-11-06 11:10:50.468940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.295 [2024-11-06 11:10:50.469471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.295 [2024-11-06 11:10:50.469490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.295 [2024-11-06 11:10:50.469498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.295 [2024-11-06 11:10:50.469717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.295 [2024-11-06 11:10:50.469943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.295 [2024-11-06 11:10:50.469953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.295 [2024-11-06 11:10:50.469961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.295 [2024-11-06 11:10:50.469969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.295 [2024-11-06 11:10:50.482923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.295 [2024-11-06 11:10:50.483454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.295 [2024-11-06 11:10:50.483470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.295 [2024-11-06 11:10:50.483478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.295 [2024-11-06 11:10:50.483697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.295 [2024-11-06 11:10:50.483921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.295 [2024-11-06 11:10:50.483932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.295 [2024-11-06 11:10:50.483939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.295 [2024-11-06 11:10:50.483946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.295 [2024-11-06 11:10:50.496906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.295 [2024-11-06 11:10:50.497402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.295 [2024-11-06 11:10:50.497420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.295 [2024-11-06 11:10:50.497427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.295 [2024-11-06 11:10:50.497646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.295 [2024-11-06 11:10:50.497874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.295 [2024-11-06 11:10:50.497885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.295 [2024-11-06 11:10:50.497892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.295 [2024-11-06 11:10:50.497903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.295 [2024-11-06 11:10:50.510863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.295 [2024-11-06 11:10:50.511545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.295 [2024-11-06 11:10:50.511584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.295 [2024-11-06 11:10:50.511596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.295 [2024-11-06 11:10:50.511842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.295 [2024-11-06 11:10:50.512067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.295 [2024-11-06 11:10:50.512078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.295 [2024-11-06 11:10:50.512085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.295 [2024-11-06 11:10:50.512094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.295 [2024-11-06 11:10:50.524854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.295 [2024-11-06 11:10:50.525562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.295 [2024-11-06 11:10:50.525602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.295 [2024-11-06 11:10:50.525613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.295 [2024-11-06 11:10:50.525860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.295 [2024-11-06 11:10:50.526085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.295 [2024-11-06 11:10:50.526095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.295 [2024-11-06 11:10:50.526103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.295 [2024-11-06 11:10:50.526112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.295 [2024-11-06 11:10:50.538678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.295 [2024-11-06 11:10:50.539313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.295 [2024-11-06 11:10:50.539353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.295 [2024-11-06 11:10:50.539364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.295 [2024-11-06 11:10:50.539602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.295 [2024-11-06 11:10:50.539834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.295 [2024-11-06 11:10:50.539846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.295 [2024-11-06 11:10:50.539854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.295 [2024-11-06 11:10:50.539862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.295 4674.50 IOPS, 18.26 MiB/s [2024-11-06T10:10:50.717Z]
00:28:59.295 [2024-11-06 11:10:50.552676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.295 [2024-11-06 11:10:50.553345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.295 [2024-11-06 11:10:50.553385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.295 [2024-11-06 11:10:50.553396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.295 [2024-11-06 11:10:50.553635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.295 [2024-11-06 11:10:50.553865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.295 [2024-11-06 11:10:50.553877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.295 [2024-11-06 11:10:50.553885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.295 [2024-11-06 11:10:50.553893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.295 [2024-11-06 11:10:50.566673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.295 [2024-11-06 11:10:50.567273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.295 [2024-11-06 11:10:50.567294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.295 [2024-11-06 11:10:50.567303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.295 [2024-11-06 11:10:50.567523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.295 [2024-11-06 11:10:50.567742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.295 [2024-11-06 11:10:50.567757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.295 [2024-11-06 11:10:50.567764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.295 [2024-11-06 11:10:50.567771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.295 [2024-11-06 11:10:50.580528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.295 [2024-11-06 11:10:50.581050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.295 [2024-11-06 11:10:50.581068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.295 [2024-11-06 11:10:50.581075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.295 [2024-11-06 11:10:50.581294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.295 [2024-11-06 11:10:50.581514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.295 [2024-11-06 11:10:50.581524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.295 [2024-11-06 11:10:50.581531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.295 [2024-11-06 11:10:50.581538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.295 [2024-11-06 11:10:50.594496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.295 [2024-11-06 11:10:50.594990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.296 [2024-11-06 11:10:50.595007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.296 [2024-11-06 11:10:50.595015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.296 [2024-11-06 11:10:50.595239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.296 [2024-11-06 11:10:50.595459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.296 [2024-11-06 11:10:50.595468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.296 [2024-11-06 11:10:50.595475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.296 [2024-11-06 11:10:50.595482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.296 [2024-11-06 11:10:50.608448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.296 [2024-11-06 11:10:50.608945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.296 [2024-11-06 11:10:50.608964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.296 [2024-11-06 11:10:50.608971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.296 [2024-11-06 11:10:50.609190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.296 [2024-11-06 11:10:50.609409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.296 [2024-11-06 11:10:50.609419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.296 [2024-11-06 11:10:50.609428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.296 [2024-11-06 11:10:50.609434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.296 [2024-11-06 11:10:50.622429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.296 [2024-11-06 11:10:50.622993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.296 [2024-11-06 11:10:50.623010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.296 [2024-11-06 11:10:50.623019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.296 [2024-11-06 11:10:50.623237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.296 [2024-11-06 11:10:50.623457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.296 [2024-11-06 11:10:50.623466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.296 [2024-11-06 11:10:50.623473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.296 [2024-11-06 11:10:50.623480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.296 [2024-11-06 11:10:50.636229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.296 [2024-11-06 11:10:50.636846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.296 [2024-11-06 11:10:50.636886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.296 [2024-11-06 11:10:50.636897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.296 [2024-11-06 11:10:50.637136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.296 [2024-11-06 11:10:50.637361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.296 [2024-11-06 11:10:50.637376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.296 [2024-11-06 11:10:50.637384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.296 [2024-11-06 11:10:50.637392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.296 [2024-11-06 11:10:50.650163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.296 [2024-11-06 11:10:50.650756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.296 [2024-11-06 11:10:50.650777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.296 [2024-11-06 11:10:50.650785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.296 [2024-11-06 11:10:50.651004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.296 [2024-11-06 11:10:50.651224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.296 [2024-11-06 11:10:50.651234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.296 [2024-11-06 11:10:50.651242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.296 [2024-11-06 11:10:50.651248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.296 [2024-11-06 11:10:50.664019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.296 [2024-11-06 11:10:50.664573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.296 [2024-11-06 11:10:50.664591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.296 [2024-11-06 11:10:50.664599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.296 [2024-11-06 11:10:50.664824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.296 [2024-11-06 11:10:50.665045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.296 [2024-11-06 11:10:50.665054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.296 [2024-11-06 11:10:50.665062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.296 [2024-11-06 11:10:50.665068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.296 [2024-11-06 11:10:50.677819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.296 [2024-11-06 11:10:50.678486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.296 [2024-11-06 11:10:50.678526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.296 [2024-11-06 11:10:50.678538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.296 [2024-11-06 11:10:50.678787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.296 [2024-11-06 11:10:50.679011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.296 [2024-11-06 11:10:50.679022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.296 [2024-11-06 11:10:50.679031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.296 [2024-11-06 11:10:50.679044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.296 [2024-11-06 11:10:50.691608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.296 [2024-11-06 11:10:50.692156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.296 [2024-11-06 11:10:50.692177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.296 [2024-11-06 11:10:50.692185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.296 [2024-11-06 11:10:50.692405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.296 [2024-11-06 11:10:50.692626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.296 [2024-11-06 11:10:50.692636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.296 [2024-11-06 11:10:50.692643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.296 [2024-11-06 11:10:50.692651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.296 [2024-11-06 11:10:50.705422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.296 [2024-11-06 11:10:50.706103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.296 [2024-11-06 11:10:50.706143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.296 [2024-11-06 11:10:50.706155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.296 [2024-11-06 11:10:50.706398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.296 [2024-11-06 11:10:50.706622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.296 [2024-11-06 11:10:50.706632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.296 [2024-11-06 11:10:50.706640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.296 [2024-11-06 11:10:50.706648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.560 [2024-11-06 11:10:50.719421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.560 [2024-11-06 11:10:50.720136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.560 [2024-11-06 11:10:50.720175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.560 [2024-11-06 11:10:50.720186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.560 [2024-11-06 11:10:50.720425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.560 [2024-11-06 11:10:50.720651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.560 [2024-11-06 11:10:50.720661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.560 [2024-11-06 11:10:50.720669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.560 [2024-11-06 11:10:50.720678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.560 [2024-11-06 11:10:50.733235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.560 [2024-11-06 11:10:50.733984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.560 [2024-11-06 11:10:50.734024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.560 [2024-11-06 11:10:50.734035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.560 [2024-11-06 11:10:50.734274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.560 [2024-11-06 11:10:50.734497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.560 [2024-11-06 11:10:50.734508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.560 [2024-11-06 11:10:50.734516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.560 [2024-11-06 11:10:50.734524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.560 [2024-11-06 11:10:50.747090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.560 [2024-11-06 11:10:50.747759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.560 [2024-11-06 11:10:50.747798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.560 [2024-11-06 11:10:50.747810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.560 [2024-11-06 11:10:50.748051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.560 [2024-11-06 11:10:50.748275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.560 [2024-11-06 11:10:50.748285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.560 [2024-11-06 11:10:50.748293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.560 [2024-11-06 11:10:50.748301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.560 [2024-11-06 11:10:50.761081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.560 [2024-11-06 11:10:50.761773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.560 [2024-11-06 11:10:50.761813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.560 [2024-11-06 11:10:50.761825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.560 [2024-11-06 11:10:50.762066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.560 [2024-11-06 11:10:50.762291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.560 [2024-11-06 11:10:50.762301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.560 [2024-11-06 11:10:50.762309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.560 [2024-11-06 11:10:50.762317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.560 [2024-11-06 11:10:50.774890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.560 [2024-11-06 11:10:50.775582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.560 [2024-11-06 11:10:50.775622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.560 [2024-11-06 11:10:50.775633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.560 [2024-11-06 11:10:50.775885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.560 [2024-11-06 11:10:50.776110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.560 [2024-11-06 11:10:50.776120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.560 [2024-11-06 11:10:50.776128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.560 [2024-11-06 11:10:50.776136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.560 [2024-11-06 11:10:50.788688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.560 [2024-11-06 11:10:50.789247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.560 [2024-11-06 11:10:50.789287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.560 [2024-11-06 11:10:50.789298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.560 [2024-11-06 11:10:50.789537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.560 [2024-11-06 11:10:50.789769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.560 [2024-11-06 11:10:50.789780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.560 [2024-11-06 11:10:50.789788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.560 [2024-11-06 11:10:50.789796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.560 [2024-11-06 11:10:50.802560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.560 [2024-11-06 11:10:50.803264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.560 [2024-11-06 11:10:50.803304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.561 [2024-11-06 11:10:50.803315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.561 [2024-11-06 11:10:50.803553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.561 [2024-11-06 11:10:50.803786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.561 [2024-11-06 11:10:50.803798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.561 [2024-11-06 11:10:50.803805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.561 [2024-11-06 11:10:50.803813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.561 [2024-11-06 11:10:50.816365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.561 [2024-11-06 11:10:50.817075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.561 [2024-11-06 11:10:50.817115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.561 [2024-11-06 11:10:50.817126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.561 [2024-11-06 11:10:50.817365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.561 [2024-11-06 11:10:50.817589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.561 [2024-11-06 11:10:50.817605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.561 [2024-11-06 11:10:50.817613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.561 [2024-11-06 11:10:50.817621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.561 [2024-11-06 11:10:50.830183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.561 [2024-11-06 11:10:50.830486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.561 [2024-11-06 11:10:50.830505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.561 [2024-11-06 11:10:50.830514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.561 [2024-11-06 11:10:50.830733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.561 [2024-11-06 11:10:50.830958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.561 [2024-11-06 11:10:50.830968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.561 [2024-11-06 11:10:50.830976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.561 [2024-11-06 11:10:50.830982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.561 [2024-11-06 11:10:50.844160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.561 [2024-11-06 11:10:50.844729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.561 [2024-11-06 11:10:50.844752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.561 [2024-11-06 11:10:50.844760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.561 [2024-11-06 11:10:50.844979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.561 [2024-11-06 11:10:50.845199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.561 [2024-11-06 11:10:50.845208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.561 [2024-11-06 11:10:50.845215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.561 [2024-11-06 11:10:50.845222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.561 [2024-11-06 11:10:50.857972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.561 [2024-11-06 11:10:50.858615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.561 [2024-11-06 11:10:50.858654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.561 [2024-11-06 11:10:50.858665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.561 [2024-11-06 11:10:50.858912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.561 [2024-11-06 11:10:50.859137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.561 [2024-11-06 11:10:50.859147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.561 [2024-11-06 11:10:50.859155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.561 [2024-11-06 11:10:50.859168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.561 [2024-11-06 11:10:50.871811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.561 [2024-11-06 11:10:50.872361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.561 [2024-11-06 11:10:50.872381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.561 [2024-11-06 11:10:50.872389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.561 [2024-11-06 11:10:50.872609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.561 [2024-11-06 11:10:50.872836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.561 [2024-11-06 11:10:50.872847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.561 [2024-11-06 11:10:50.872854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.561 [2024-11-06 11:10:50.872861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.561 [2024-11-06 11:10:50.885613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.561 [2024-11-06 11:10:50.886147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.561 [2024-11-06 11:10:50.886187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.561 [2024-11-06 11:10:50.886198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.561 [2024-11-06 11:10:50.886437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.561 [2024-11-06 11:10:50.886661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.561 [2024-11-06 11:10:50.886672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.561 [2024-11-06 11:10:50.886680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.561 [2024-11-06 11:10:50.886689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.561 [2024-11-06 11:10:50.899462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.561 [2024-11-06 11:10:50.899994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.561 [2024-11-06 11:10:50.900032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.561 [2024-11-06 11:10:50.900045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.562 [2024-11-06 11:10:50.900286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.562 [2024-11-06 11:10:50.900510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.562 [2024-11-06 11:10:50.900520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.562 [2024-11-06 11:10:50.900528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.562 [2024-11-06 11:10:50.900537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.562 [2024-11-06 11:10:50.913312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:59.562 [2024-11-06 11:10:50.914055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.562 [2024-11-06 11:10:50.914095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420
00:28:59.562 [2024-11-06 11:10:50.914106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set
00:28:59.562 [2024-11-06 11:10:50.914345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor
00:28:59.562 [2024-11-06 11:10:50.914569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:59.562 [2024-11-06 11:10:50.914579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:59.562 [2024-11-06 11:10:50.914587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:59.562 [2024-11-06 11:10:50.914595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:59.562 [2024-11-06 11:10:50.927159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.562 [2024-11-06 11:10:50.927875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.562 [2024-11-06 11:10:50.927914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.562 [2024-11-06 11:10:50.927927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.562 [2024-11-06 11:10:50.928167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.562 [2024-11-06 11:10:50.928391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.562 [2024-11-06 11:10:50.928402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.562 [2024-11-06 11:10:50.928410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.562 [2024-11-06 11:10:50.928419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.562 [2024-11-06 11:10:50.940996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.562 [2024-11-06 11:10:50.941550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.562 [2024-11-06 11:10:50.941570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.562 [2024-11-06 11:10:50.941578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.562 [2024-11-06 11:10:50.941804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.562 [2024-11-06 11:10:50.942026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.562 [2024-11-06 11:10:50.942036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.562 [2024-11-06 11:10:50.942043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.562 [2024-11-06 11:10:50.942051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.562 [2024-11-06 11:10:50.954806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.562 [2024-11-06 11:10:50.955469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.562 [2024-11-06 11:10:50.955509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.562 [2024-11-06 11:10:50.955520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.562 [2024-11-06 11:10:50.955772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.562 [2024-11-06 11:10:50.955998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.562 [2024-11-06 11:10:50.956008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.562 [2024-11-06 11:10:50.956016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.562 [2024-11-06 11:10:50.956025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.562 [2024-11-06 11:10:50.968806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.562 [2024-11-06 11:10:50.969249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.562 [2024-11-06 11:10:50.969269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.562 [2024-11-06 11:10:50.969278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.562 [2024-11-06 11:10:50.969498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.562 [2024-11-06 11:10:50.969719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.562 [2024-11-06 11:10:50.969729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.562 [2024-11-06 11:10:50.969737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.562 [2024-11-06 11:10:50.969751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.826 [2024-11-06 11:10:50.982720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.826 [2024-11-06 11:10:50.983407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.826 [2024-11-06 11:10:50.983446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.826 [2024-11-06 11:10:50.983458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.826 [2024-11-06 11:10:50.983697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.826 [2024-11-06 11:10:50.983930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.826 [2024-11-06 11:10:50.983942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.826 [2024-11-06 11:10:50.983952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.826 [2024-11-06 11:10:50.983961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.826 [2024-11-06 11:10:50.996518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.826 [2024-11-06 11:10:50.997137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.826 [2024-11-06 11:10:50.997177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.826 [2024-11-06 11:10:50.997188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.826 [2024-11-06 11:10:50.997428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.826 [2024-11-06 11:10:50.997652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.826 [2024-11-06 11:10:50.997667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.826 [2024-11-06 11:10:50.997675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.826 [2024-11-06 11:10:50.997684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.826 [2024-11-06 11:10:51.010468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.826 [2024-11-06 11:10:51.011115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.826 [2024-11-06 11:10:51.011157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.826 [2024-11-06 11:10:51.011168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.826 [2024-11-06 11:10:51.011407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.826 [2024-11-06 11:10:51.011632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.826 [2024-11-06 11:10:51.011642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.826 [2024-11-06 11:10:51.011650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.826 [2024-11-06 11:10:51.011658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.826 [2024-11-06 11:10:51.024438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.826 [2024-11-06 11:10:51.025093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.826 [2024-11-06 11:10:51.025133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.826 [2024-11-06 11:10:51.025144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.826 [2024-11-06 11:10:51.025383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.826 [2024-11-06 11:10:51.025607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.826 [2024-11-06 11:10:51.025618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.826 [2024-11-06 11:10:51.025626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.826 [2024-11-06 11:10:51.025634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.826 [2024-11-06 11:10:51.038411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.826 [2024-11-06 11:10:51.039133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.826 [2024-11-06 11:10:51.039173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.826 [2024-11-06 11:10:51.039184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.826 [2024-11-06 11:10:51.039423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.826 [2024-11-06 11:10:51.039647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.826 [2024-11-06 11:10:51.039659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.826 [2024-11-06 11:10:51.039667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.826 [2024-11-06 11:10:51.039682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.826 [2024-11-06 11:10:51.052245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.826 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:59.826 [2024-11-06 11:10:51.052783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.826 [2024-11-06 11:10:51.052823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.826 [2024-11-06 11:10:51.052836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.826 [2024-11-06 11:10:51.053078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.826 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:28:59.826 [2024-11-06 11:10:51.053302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.826 [2024-11-06 11:10:51.053313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.826 [2024-11-06 11:10:51.053321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.826 [2024-11-06 11:10:51.053330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.826 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:59.826 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:59.826 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.826 [2024-11-06 11:10:51.066152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.826 [2024-11-06 11:10:51.066809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.826 [2024-11-06 11:10:51.066850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.826 [2024-11-06 11:10:51.066864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.826 [2024-11-06 11:10:51.067108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.826 [2024-11-06 11:10:51.067332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.826 [2024-11-06 11:10:51.067344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.826 [2024-11-06 11:10:51.067353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.826 [2024-11-06 11:10:51.067364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.826 [2024-11-06 11:10:51.080148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.826 [2024-11-06 11:10:51.080828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.826 [2024-11-06 11:10:51.080868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.826 [2024-11-06 11:10:51.080880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.826 [2024-11-06 11:10:51.081123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.826 [2024-11-06 11:10:51.081347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.826 [2024-11-06 11:10:51.081358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.826 [2024-11-06 11:10:51.081371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.826 [2024-11-06 11:10:51.081379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.826 [2024-11-06 11:10:51.094154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.826 [2024-11-06 11:10:51.094700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.826 [2024-11-06 11:10:51.094721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.826 [2024-11-06 11:10:51.094729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.826 [2024-11-06 11:10:51.094955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.826 [2024-11-06 11:10:51.095177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.827 [2024-11-06 11:10:51.095186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.827 [2024-11-06 11:10:51.095194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.827 [2024-11-06 11:10:51.095201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.827 [2024-11-06 11:10:51.100199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.827 [2024-11-06 11:10:51.107959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.827 [2024-11-06 11:10:51.108356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.827 [2024-11-06 11:10:51.108375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.827 [2024-11-06 11:10:51.108383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.827 [2024-11-06 11:10:51.108602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.827 [2024-11-06 11:10:51.108829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.827 [2024-11-06 11:10:51.108840] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.827 [2024-11-06 11:10:51.108848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.827 [2024-11-06 11:10:51.108855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:59.827 [2024-11-06 11:10:51.121826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.827 [2024-11-06 11:10:51.122496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.827 [2024-11-06 11:10:51.122536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.827 [2024-11-06 11:10:51.122552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.827 [2024-11-06 11:10:51.122801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.827 [2024-11-06 11:10:51.123025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.827 [2024-11-06 11:10:51.123036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.827 [2024-11-06 11:10:51.123045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.827 [2024-11-06 11:10:51.123054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.827 Malloc0 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.827 [2024-11-06 11:10:51.135884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.827 [2024-11-06 11:10:51.136391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.827 [2024-11-06 11:10:51.136431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.827 [2024-11-06 11:10:51.136443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.827 [2024-11-06 11:10:51.136682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.827 [2024-11-06 11:10:51.136915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.827 [2024-11-06 11:10:51.136927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.827 [2024-11-06 11:10:51.136935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.827 [2024-11-06 11:10:51.136943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.827 [2024-11-06 11:10:51.149736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.827 [2024-11-06 11:10:51.150261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.827 [2024-11-06 11:10:51.150300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.827 [2024-11-06 11:10:51.150312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.827 [2024-11-06 11:10:51.150551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.827 [2024-11-06 11:10:51.150786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.827 [2024-11-06 11:10:51.150797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.827 [2024-11-06 11:10:51.150805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.827 [2024-11-06 11:10:51.150818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.827 [2024-11-06 11:10:51.163599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.827 [2024-11-06 11:10:51.164270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.827 [2024-11-06 11:10:51.164309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df1000 with addr=10.0.0.2, port=4420 00:28:59.827 [2024-11-06 11:10:51.164321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1000 is same with the state(6) to be set 00:28:59.827 [2024-11-06 11:10:51.164559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df1000 (9): Bad file descriptor 00:28:59.827 [2024-11-06 11:10:51.164791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:59.827 [2024-11-06 11:10:51.164803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:59.827 [2024-11-06 11:10:51.164811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:59.827 [2024-11-06 11:10:51.164820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:59.827 [2024-11-06 11:10:51.165490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.827 11:10:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3437031 00:28:59.827 [2024-11-06 11:10:51.177588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:59.827 [2024-11-06 11:10:51.244810] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:29:01.342 4503.57 IOPS, 17.59 MiB/s [2024-11-06T10:10:53.707Z] 5312.50 IOPS, 20.75 MiB/s [2024-11-06T10:10:54.649Z] 5970.78 IOPS, 23.32 MiB/s [2024-11-06T10:10:55.591Z] 6486.80 IOPS, 25.34 MiB/s [2024-11-06T10:10:56.976Z] 6899.00 IOPS, 26.95 MiB/s [2024-11-06T10:10:57.919Z] 7269.92 IOPS, 28.40 MiB/s [2024-11-06T10:10:58.860Z] 7580.85 IOPS, 29.61 MiB/s [2024-11-06T10:10:59.805Z] 7834.71 IOPS, 30.60 MiB/s [2024-11-06T10:10:59.805Z] 8053.60 IOPS, 31.46 MiB/s 00:29:08.383 Latency(us) 00:29:08.383 [2024-11-06T10:10:59.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.383 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:08.383 Verification LBA range: start 0x0 length 0x4000 00:29:08.383 Nvme1n1 : 15.01 8055.99 31.47 9882.93 0.00 7109.93 791.89 14308.69 00:29:08.383 [2024-11-06T10:10:59.805Z] =================================================================================================================== 00:29:08.383 [2024-11-06T10:10:59.805Z] Total : 8055.99 31.47 9882.93 0.00 7109.93 791.89 14308.69 00:29:08.383 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:08.383 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:08.383 11:10:59 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.383 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:08.384 rmmod nvme_tcp 00:29:08.384 rmmod nvme_fabrics 00:29:08.384 rmmod nvme_keyring 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3438330 ']' 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3438330 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 3438330 ']' 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 3438330 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:29:08.384 11:10:59 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:08.384 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3438330 00:29:08.647 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:08.647 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:08.647 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3438330' 00:29:08.647 killing process with pid 3438330 00:29:08.647 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 3438330 00:29:08.647 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 3438330 00:29:08.647 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:08.647 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:08.647 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:08.647 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:08.647 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:29:08.647 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:29:08.647 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:08.647 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:08.647 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:08.647 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.647 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:08.647 11:10:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.193 11:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:11.193 00:29:11.193 real 0m27.823s 00:29:11.193 user 1m2.613s 00:29:11.193 sys 0m7.387s 00:29:11.193 11:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:11.193 11:11:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:11.193 ************************************ 00:29:11.193 END TEST nvmf_bdevperf 00:29:11.193 ************************************ 00:29:11.193 11:11:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:11.193 11:11:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:11.193 11:11:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:11.193 11:11:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.193 ************************************ 00:29:11.193 START TEST nvmf_target_disconnect 00:29:11.193 ************************************ 00:29:11.193 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:11.193 * Looking for test storage... 
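The teardown traced above (`kill -0`, `uname`, `ps --no-headers -o comm=`, then `kill`/`wait` on PID 3438330) guards against signalling the wrong process. A minimal sketch of that helper, with the retry loop of the real `common.sh` omitted and details assumed from the xtrace:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess flow seen in the trace: confirm the PID is
# alive, read its command name (the trace saw "reactor_1"), refuse to
# signal a bare "sudo" wrapper, then kill and reap the process.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1        # process still alive?
    local name
    name=$(ps --no-headers -o comm= -p "$pid")    # e.g. reactor_1
    [ "$name" != "sudo" ] || return 1             # never kill the sudo shim
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                       # reap; ignore kill status
    return 0
}

sleep 30 &
killprocess $!
```

The `comm` check matters because the test scripts often launch targets via `sudo`; killing the wrapper instead of the reactor would leave the SPDK process orphaned.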
00:29:11.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:11.194 11:11:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:11.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.194 
--rc genhtml_branch_coverage=1 00:29:11.194 --rc genhtml_function_coverage=1 00:29:11.194 --rc genhtml_legend=1 00:29:11.194 --rc geninfo_all_blocks=1 00:29:11.194 --rc geninfo_unexecuted_blocks=1 00:29:11.194 00:29:11.194 ' 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:11.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.194 --rc genhtml_branch_coverage=1 00:29:11.194 --rc genhtml_function_coverage=1 00:29:11.194 --rc genhtml_legend=1 00:29:11.194 --rc geninfo_all_blocks=1 00:29:11.194 --rc geninfo_unexecuted_blocks=1 00:29:11.194 00:29:11.194 ' 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:11.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.194 --rc genhtml_branch_coverage=1 00:29:11.194 --rc genhtml_function_coverage=1 00:29:11.194 --rc genhtml_legend=1 00:29:11.194 --rc geninfo_all_blocks=1 00:29:11.194 --rc geninfo_unexecuted_blocks=1 00:29:11.194 00:29:11.194 ' 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:11.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.194 --rc genhtml_branch_coverage=1 00:29:11.194 --rc genhtml_function_coverage=1 00:29:11.194 --rc genhtml_legend=1 00:29:11.194 --rc geninfo_all_blocks=1 00:29:11.194 --rc geninfo_unexecuted_blocks=1 00:29:11.194 00:29:11.194 ' 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.194 11:11:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:11.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:11.194 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:11.195 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:11.195 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:11.195 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:11.195 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:11.195 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.195 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:11.195 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:11.195 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:11.195 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.195 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:11.195 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.195 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:11.195 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:11.195 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:11.195 11:11:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:17.789 
11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:17.789 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:17.789 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:17.789 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.789 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:17.790 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.790 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.053 11:11:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:18.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:29:18.053 00:29:18.053 --- 10.0.0.2 ping statistics --- 00:29:18.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.053 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:18.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:29:18.053 00:29:18.053 --- 10.0.0.1 ping statistics --- 00:29:18.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.053 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:18.053 11:11:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:18.053 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:18.313 ************************************ 00:29:18.313 START TEST nvmf_target_disconnect_tc1 00:29:18.313 ************************************ 00:29:18.313 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:29:18.313 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.313 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:18.313 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.313 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:18.313 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:18.313 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:18.313 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:18.313 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:18.313 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:18.313 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:18.313 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.314 [2024-11-06 11:11:09.609978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.314 [2024-11-06 11:11:09.610040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bbad0 with 
addr=10.0.0.2, port=4420 00:29:18.314 [2024-11-06 11:11:09.610070] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:18.314 [2024-11-06 11:11:09.610082] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:18.314 [2024-11-06 11:11:09.610090] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:18.314 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:18.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:18.314 Initializing NVMe Controllers 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:18.314 00:29:18.314 real 0m0.128s 00:29:18.314 user 0m0.057s 00:29:18.314 sys 0m0.070s 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:18.314 ************************************ 00:29:18.314 END TEST nvmf_target_disconnect_tc1 00:29:18.314 ************************************ 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:18.314 11:11:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:18.314 ************************************ 00:29:18.314 START TEST nvmf_target_disconnect_tc2 00:29:18.314 ************************************ 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3444368 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3444368 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3444368 ']' 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:18.314 11:11:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:18.575 [2024-11-06 11:11:09.768677] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:29:18.575 [2024-11-06 11:11:09.768743] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.575 [2024-11-06 11:11:09.867995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:18.575 [2024-11-06 11:11:09.920227] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.575 [2024-11-06 11:11:09.920282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.575 [2024-11-06 11:11:09.920296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.575 [2024-11-06 11:11:09.920303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.575 [2024-11-06 11:11:09.920309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:18.575 [2024-11-06 11:11:09.922677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:18.575 [2024-11-06 11:11:09.922836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:18.575 [2024-11-06 11:11:09.923146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:18.575 [2024-11-06 11:11:09.923149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.595 Malloc0 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.595 11:11:10 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.595 [2024-11-06 11:11:10.695527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.595 11:11:10 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.595 [2024-11-06 11:11:10.735940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3444418 00:29:19.595 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:19.596 11:11:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:21.516 11:11:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3444368 00:29:21.516 11:11:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Write completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Write completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Write completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Write completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Write completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Write completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Write completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Write completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 
Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Write completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Write completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Read completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 Write completed with error (sct=0, sc=8) 00:29:21.516 starting I/O failed 00:29:21.516 [2024-11-06 11:11:12.769655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.516 [2024-11-06 11:11:12.770177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.516 [2024-11-06 11:11:12.770210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.516 qpair failed and we were unable to recover it. 00:29:21.516 [2024-11-06 11:11:12.770570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.516 [2024-11-06 11:11:12.770579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.516 qpair failed and we were unable to recover it. 
00:29:21.516 [... the sequence "posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111" / "nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." repeats for roughly forty further reconnect attempts between 11:11:12.770 and 11:11:12.786; only the timestamps differ ...] 00:29:21.518
00:29:21.518 [2024-11-06 11:11:12.786232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.786241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.786578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.786586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.786801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.786809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.787150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.787159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.787459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.787468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 
00:29:21.518 [2024-11-06 11:11:12.787682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.787691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.787859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.787867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.788173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.788182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.788491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.788500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.788832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.788842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 
00:29:21.518 [2024-11-06 11:11:12.789166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.789175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.789462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.789473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.789822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.789830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.790115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.790123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.790420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.790428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 
00:29:21.518 [2024-11-06 11:11:12.790749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.790758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.791073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.791081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.791365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.791372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.791652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.791659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.791844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.791851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 
00:29:21.518 [2024-11-06 11:11:12.792160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.792168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.792501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.792510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.792706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.792715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.793036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.793045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.793365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.793374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 
00:29:21.518 [2024-11-06 11:11:12.793652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.793661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.794024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.794033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.794219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.794229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.794527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.794536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.794856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.794865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 
00:29:21.518 [2024-11-06 11:11:12.795205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.518 [2024-11-06 11:11:12.795213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.518 qpair failed and we were unable to recover it. 00:29:21.518 [2024-11-06 11:11:12.795540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.795548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.795843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.795851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.796116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.796124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.796412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.796420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 
00:29:21.519 [2024-11-06 11:11:12.796579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.796587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.796791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.796799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.797136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.797144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.797426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.797434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.797592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.797600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 
00:29:21.519 [2024-11-06 11:11:12.797941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.797950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.798272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.798282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.798564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.798572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.798771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.798779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.799119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.799127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 
00:29:21.519 [2024-11-06 11:11:12.799402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.799410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.799715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.799724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.800101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.800110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.800408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.800416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.800713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.800721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 
00:29:21.519 [2024-11-06 11:11:12.801066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.801074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.801370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.801380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.801669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.801678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.802032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.802040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.802327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.802335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 
00:29:21.519 [2024-11-06 11:11:12.802502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.802510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.802698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.802708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.802991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.803000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.803186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.803195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.803563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.803572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 
00:29:21.519 [2024-11-06 11:11:12.803828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.803837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.804127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.804135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.804457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.804464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.804783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.804792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.805072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.805080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 
00:29:21.519 [2024-11-06 11:11:12.805406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.805415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.805731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.805740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.519 qpair failed and we were unable to recover it. 00:29:21.519 [2024-11-06 11:11:12.806056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.519 [2024-11-06 11:11:12.806065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.520 qpair failed and we were unable to recover it. 00:29:21.520 [2024-11-06 11:11:12.806399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.520 [2024-11-06 11:11:12.806408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.520 qpair failed and we were unable to recover it. 00:29:21.520 [2024-11-06 11:11:12.806599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.520 [2024-11-06 11:11:12.806609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.520 qpair failed and we were unable to recover it. 
00:29:21.520 [2024-11-06 11:11:12.806919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.520 [2024-11-06 11:11:12.806928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.520 qpair failed and we were unable to recover it. 00:29:21.520 [2024-11-06 11:11:12.807200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.520 [2024-11-06 11:11:12.807209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.520 qpair failed and we were unable to recover it. 00:29:21.520 [2024-11-06 11:11:12.807507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.520 [2024-11-06 11:11:12.807516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.520 qpair failed and we were unable to recover it. 00:29:21.520 [2024-11-06 11:11:12.807725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.520 [2024-11-06 11:11:12.807734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.520 qpair failed and we were unable to recover it. 00:29:21.520 [2024-11-06 11:11:12.808041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.520 [2024-11-06 11:11:12.808050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.520 qpair failed and we were unable to recover it. 
00:29:21.520 [2024-11-06 11:11:12.808331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.808340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.808677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.808686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.808952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.808961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.809111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.809120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.809391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.809400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.809671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.809679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.810011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.810020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.810185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.810195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.810525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.810533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.810869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.810878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.811240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.811247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.811549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.811558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.811878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.811887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.812061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.812070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.812394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.812404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.812706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.812714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.813056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.813067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.813361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.813369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.813535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.813544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.813742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.813754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.814058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.814066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.814395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.814403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.814608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.814617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.814946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.814954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.815294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.815301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.815614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.815622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.815933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.815941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.816313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.816321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.816623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.816633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.816937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.816946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.520 [2024-11-06 11:11:12.817109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.520 [2024-11-06 11:11:12.817117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.520 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.817307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.817315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.817607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.817615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.817934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.817943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.818274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.818282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.818545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.818553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.818771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.818779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.819057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.819065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.819398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.819406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.819723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.819732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.819939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.819947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.820255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.820263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.820538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.820546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.820866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.820874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.821173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.821182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.821367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.821375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.821559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.821567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.821844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.821853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.822159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.822168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.822469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.822477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.822790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.822798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.823133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.823141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.823465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.823473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.823777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.823786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.824100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.824108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.824405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.824414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.824509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.521 [2024-11-06 11:11:12.824518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.521 qpair failed and we were unable to recover it.
00:29:21.521 [2024-11-06 11:11:12.824791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.824799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.825113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.825122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.825331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.825340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.825644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.825653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.826062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.826070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.826379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.826388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.826592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.826600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.826868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.826876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.827177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.827185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.827514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.827523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.827739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.827751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.828005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.828013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.828294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.828303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.828615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.828624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.828928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.828938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.829229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.829239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.829340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.829348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.829668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.829676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.829917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.829927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.830222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.830230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.830440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.830449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.830727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.830735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.831052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.831061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.831277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.831286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.831411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.831420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.831618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.831627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.831943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.831952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.832262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.832271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.832528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.832537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.832908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.832917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.833207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.833217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.833544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.833553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.833868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.833877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.834172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.834181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.834515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.834524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.834771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.834780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.835092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.835115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.522 [2024-11-06 11:11:12.835423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.522 [2024-11-06 11:11:12.835437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.522 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.835743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.835758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.836067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.836077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.836184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.836191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.836478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.836485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.836795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.836803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.837043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.837051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.837372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.837379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.837552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.837560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.837754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.837761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.838037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.838045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.838360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.838367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.838657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.838664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.838975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.838984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.839288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.839295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.839598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.839605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.839935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.839943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.840260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.840268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.840595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.840602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.840908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.840920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.841261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.523 [2024-11-06 11:11:12.841269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.523 qpair failed and we were unable to recover it.
00:29:21.523 [2024-11-06 11:11:12.841557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-11-06 11:11:12.841571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.523 qpair failed and we were unable to recover it. 00:29:21.523 [2024-11-06 11:11:12.841878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-11-06 11:11:12.841887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.523 qpair failed and we were unable to recover it. 00:29:21.523 [2024-11-06 11:11:12.842175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-11-06 11:11:12.842184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.523 qpair failed and we were unable to recover it. 00:29:21.523 [2024-11-06 11:11:12.842523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-11-06 11:11:12.842530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.523 qpair failed and we were unable to recover it. 00:29:21.523 [2024-11-06 11:11:12.842850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-11-06 11:11:12.842858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.523 qpair failed and we were unable to recover it. 
00:29:21.523 [2024-11-06 11:11:12.843043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-11-06 11:11:12.843051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.523 qpair failed and we were unable to recover it. 00:29:21.523 [2024-11-06 11:11:12.843364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-11-06 11:11:12.843371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.523 qpair failed and we were unable to recover it. 00:29:21.523 [2024-11-06 11:11:12.843738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-11-06 11:11:12.843748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.523 qpair failed and we were unable to recover it. 00:29:21.523 [2024-11-06 11:11:12.843955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-11-06 11:11:12.843962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.523 qpair failed and we were unable to recover it. 00:29:21.523 [2024-11-06 11:11:12.844301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-11-06 11:11:12.844309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.523 qpair failed and we were unable to recover it. 
00:29:21.523 [2024-11-06 11:11:12.844598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-11-06 11:11:12.844606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.523 qpair failed and we were unable to recover it. 00:29:21.523 [2024-11-06 11:11:12.844823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-11-06 11:11:12.844831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.523 qpair failed and we were unable to recover it. 00:29:21.523 [2024-11-06 11:11:12.845171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-11-06 11:11:12.845178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.523 qpair failed and we were unable to recover it. 00:29:21.523 [2024-11-06 11:11:12.845520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-11-06 11:11:12.845528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.523 qpair failed and we were unable to recover it. 00:29:21.523 [2024-11-06 11:11:12.845806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-11-06 11:11:12.845813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.523 qpair failed and we were unable to recover it. 
00:29:21.523 [2024-11-06 11:11:12.846042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-11-06 11:11:12.846048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.523 qpair failed and we were unable to recover it. 00:29:21.523 [2024-11-06 11:11:12.846348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.523 [2024-11-06 11:11:12.846355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.523 qpair failed and we were unable to recover it. 00:29:21.523 [2024-11-06 11:11:12.846675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.846682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.846911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.846918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.847242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.847249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 
00:29:21.524 [2024-11-06 11:11:12.847544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.847552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.847869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.847879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.848188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.848196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.848523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.848530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.848739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.848749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 
00:29:21.524 [2024-11-06 11:11:12.849039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.849047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.849371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.849379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.849676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.849683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.850000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.850008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.850238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.850244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 
00:29:21.524 [2024-11-06 11:11:12.850553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.850559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.850831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.850840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.851137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.851144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.851455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.851463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.851797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.851804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 
00:29:21.524 [2024-11-06 11:11:12.852146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.852153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.852477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.852485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.852771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.852779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.853125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.853132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.853432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.853440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 
00:29:21.524 [2024-11-06 11:11:12.853741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.853751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.854061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.854069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.854324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.854332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.854633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.854642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.854961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.854969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 
00:29:21.524 [2024-11-06 11:11:12.855278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.855285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.855587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.855595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.855894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.855902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.856208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.856217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.856426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.856434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 
00:29:21.524 [2024-11-06 11:11:12.856726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.856734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.857044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.524 [2024-11-06 11:11:12.857052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.524 qpair failed and we were unable to recover it. 00:29:21.524 [2024-11-06 11:11:12.857353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.857360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.857705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.857713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.857951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.857959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 
00:29:21.525 [2024-11-06 11:11:12.858264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.858272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.858583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.858591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.858886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.858893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.859223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.859230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.859518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.859526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 
00:29:21.525 [2024-11-06 11:11:12.859871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.859878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.860097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.860106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.860403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.860410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.860724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.860731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.861027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.861034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 
00:29:21.525 [2024-11-06 11:11:12.861394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.861402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.861712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.861719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.862028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.862036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.862326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.862332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.862645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.862652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 
00:29:21.525 [2024-11-06 11:11:12.862952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.862960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.863266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.863272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.863563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.863570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.863884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.863892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 00:29:21.525 [2024-11-06 11:11:12.864215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.525 [2024-11-06 11:11:12.864222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.525 qpair failed and we were unable to recover it. 
00:29:21.525 [2024-11-06 11:11:12.864529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.525 [2024-11-06 11:11:12.864537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.525 qpair failed and we were unable to recover it.
[... identical repeats elided: the same connect() failure (errno = 111) and qpair recovery error for tqpair=0x7fbdac000b90, addr=10.0.0.2, port=4420 recurs continuously from 11:11:12.864 through 11:11:12.898 ...]
00:29:21.528 [2024-11-06 11:11:12.898696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.528 [2024-11-06 11:11:12.898703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.528 qpair failed and we were unable to recover it. 00:29:21.528 [2024-11-06 11:11:12.899029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.528 [2024-11-06 11:11:12.899036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.528 qpair failed and we were unable to recover it. 00:29:21.528 [2024-11-06 11:11:12.899342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.528 [2024-11-06 11:11:12.899349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.528 qpair failed and we were unable to recover it. 00:29:21.528 [2024-11-06 11:11:12.899685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.528 [2024-11-06 11:11:12.899692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.528 qpair failed and we were unable to recover it. 00:29:21.528 [2024-11-06 11:11:12.900025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.528 [2024-11-06 11:11:12.900032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.528 qpair failed and we were unable to recover it. 
00:29:21.528 [2024-11-06 11:11:12.900375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.528 [2024-11-06 11:11:12.900383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.528 qpair failed and we were unable to recover it. 00:29:21.528 [2024-11-06 11:11:12.900726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.528 [2024-11-06 11:11:12.900733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.528 qpair failed and we were unable to recover it. 00:29:21.528 [2024-11-06 11:11:12.901046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.528 [2024-11-06 11:11:12.901053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.528 qpair failed and we were unable to recover it. 00:29:21.528 [2024-11-06 11:11:12.901383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.528 [2024-11-06 11:11:12.901390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.528 qpair failed and we were unable to recover it. 00:29:21.528 [2024-11-06 11:11:12.901717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.528 [2024-11-06 11:11:12.901723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.528 qpair failed and we were unable to recover it. 
00:29:21.528 [2024-11-06 11:11:12.902010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.528 [2024-11-06 11:11:12.902017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.902334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.902342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.902641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.902648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.902970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.902977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.903282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.903289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 
00:29:21.529 [2024-11-06 11:11:12.903577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.903584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.903897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.903904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.904220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.904227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.904517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.904525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.904813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.904821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 
00:29:21.529 [2024-11-06 11:11:12.905135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.905142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.905440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.905448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.905765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.905772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.906086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.906093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.906392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.906398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 
00:29:21.529 [2024-11-06 11:11:12.906706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.906714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.906914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.906921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.907256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.907263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.907557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.907563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.907873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.907880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 
00:29:21.529 [2024-11-06 11:11:12.908192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.908199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.908484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.908494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.908802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.908810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.909106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.909113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.909431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.909438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 
00:29:21.529 [2024-11-06 11:11:12.909759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.909767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.910040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.910047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.910356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.910362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.910663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.910671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.910979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.910988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 
00:29:21.529 [2024-11-06 11:11:12.911303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.911310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.911636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.911644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.911934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.911942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.912245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.912253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.912552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.912560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 
00:29:21.529 [2024-11-06 11:11:12.912870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.912878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.913190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.913198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.529 [2024-11-06 11:11:12.913481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.529 [2024-11-06 11:11:12.913490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.529 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.913815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.913823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.914137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.914144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 
00:29:21.530 [2024-11-06 11:11:12.914455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.914462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.914812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.914829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.915155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.915163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.915469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.915476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.915776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.915783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 
00:29:21.530 [2024-11-06 11:11:12.916082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.916089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.916293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.916299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.916656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.916663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.916942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.916951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.917155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.917161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 
00:29:21.530 [2024-11-06 11:11:12.917473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.917480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.917795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.917802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.918097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.918104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.918392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.918398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.918711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.918718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 
00:29:21.530 [2024-11-06 11:11:12.919024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.919039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.919330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.919337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.919649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.919656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.919969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.919976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.920270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.920277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 
00:29:21.530 [2024-11-06 11:11:12.920593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.920599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.920912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.920919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.921251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.921257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.921565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.921572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 00:29:21.530 [2024-11-06 11:11:12.921891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.530 [2024-11-06 11:11:12.921898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.530 qpair failed and we were unable to recover it. 
00:29:21.530 [2024-11-06 11:11:12.922210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.530 [2024-11-06 11:11:12.922217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:21.530 qpair failed and we were unable to recover it.
[... identical connect() retries (errno = 111) against tqpair=0x7fbdac000b90, addr=10.0.0.2, port=4420 repeated from 11:11:12.922526 through 11:11:12.957543; every attempt failed and the qpair could not be recovered ...]
00:29:21.808 [2024-11-06 11:11:12.957830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.808 [2024-11-06 11:11:12.957838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.808 qpair failed and we were unable to recover it. 00:29:21.808 [2024-11-06 11:11:12.958152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.808 [2024-11-06 11:11:12.958159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.808 qpair failed and we were unable to recover it. 00:29:21.808 [2024-11-06 11:11:12.958462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.808 [2024-11-06 11:11:12.958469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.808 qpair failed and we were unable to recover it. 00:29:21.808 [2024-11-06 11:11:12.958775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.958782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.959087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.959094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 
00:29:21.809 [2024-11-06 11:11:12.959437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.959443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.959729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.959736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.960011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.960019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.960309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.960316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.960621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.960630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 
00:29:21.809 [2024-11-06 11:11:12.960918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.960927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.961261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.961269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.961576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.961584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.961787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.961795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.962093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.962099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 
00:29:21.809 [2024-11-06 11:11:12.962404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.962410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.962717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.962724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.963087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.963093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.963398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.963405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.963710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.963716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 
00:29:21.809 [2024-11-06 11:11:12.964017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.964024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.964348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.964355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.964667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.964674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.964977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.964984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.965290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.965297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 
00:29:21.809 [2024-11-06 11:11:12.965621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.965627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.965930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.965937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.966246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.966253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.966571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.966581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.966881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.966888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 
00:29:21.809 [2024-11-06 11:11:12.967213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.967220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.967533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.967541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.967832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.967839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.968146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.968153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.968464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.968470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 
00:29:21.809 [2024-11-06 11:11:12.968786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.968793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.969102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.969110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.969415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.969423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.969618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.969626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.969920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.969927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 
00:29:21.809 [2024-11-06 11:11:12.970090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.970098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.809 [2024-11-06 11:11:12.970435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.809 [2024-11-06 11:11:12.970442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.809 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.970754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.970761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.971072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.971079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.971375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.971390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 
00:29:21.810 [2024-11-06 11:11:12.971723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.971730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.972042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.972049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.972341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.972348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.972664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.972671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.972986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.972993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 
00:29:21.810 [2024-11-06 11:11:12.973290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.973298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.973605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.973612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.973917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.973924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.974258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.974265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.974547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.974554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 
00:29:21.810 [2024-11-06 11:11:12.974732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.974740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.975064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.975073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.975394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.975402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.975589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.975597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.975907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.975914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 
00:29:21.810 [2024-11-06 11:11:12.976213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.976220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.976527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.976533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.976842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.976850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.977175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.977181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.977548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.977556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 
00:29:21.810 [2024-11-06 11:11:12.977847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.977854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.978145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.978152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.978456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.978462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.978751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.978760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.979132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.979139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 
00:29:21.810 [2024-11-06 11:11:12.979417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.979424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.979616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.979623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.980005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.980012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.980321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.980328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-11-06 11:11:12.980703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.810 [2024-11-06 11:11:12.980709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.810 qpair failed and we were unable to recover it. 
00:29:21.811 [2024-11-06 11:11:12.981043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.981050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.981356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.981363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.981699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.981706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.981930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.981936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.982260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.982267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 
00:29:21.811 [2024-11-06 11:11:12.982585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.982592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.982891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.982898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.983224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.983231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.983542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.983549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.983861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.983868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 
00:29:21.811 [2024-11-06 11:11:12.984178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.984185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.984473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.984480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.984799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.984806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.985023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.985030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.985326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.985334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 
00:29:21.811 [2024-11-06 11:11:12.985643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.985650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.985947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.985955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.986307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.986315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.986624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.986631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.986825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.986834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 
00:29:21.811 [2024-11-06 11:11:12.987148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.987156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.987440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.987447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.987625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.987632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.987943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.987950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.988263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.988270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 
00:29:21.811 [2024-11-06 11:11:12.988590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.988597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.988908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.988915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.989172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.989179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.989562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.989568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.989849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.989856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 
00:29:21.811 [2024-11-06 11:11:12.990041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.990048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.990266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.990274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.990635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.990643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.990875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.990883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.991097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.991104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 
00:29:21.811 [2024-11-06 11:11:12.991381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.991388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.991694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.991701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-11-06 11:11:12.991981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.811 [2024-11-06 11:11:12.991988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.992160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.992168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.992568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.992575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 
00:29:21.812 [2024-11-06 11:11:12.992919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.992927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.993235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.993242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.993551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.993557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.993849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.993856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.994239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.994247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 
00:29:21.812 [2024-11-06 11:11:12.994547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.994555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.994866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.994874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.995216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.995224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.995543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.995550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.995754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.995762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 
00:29:21.812 [2024-11-06 11:11:12.996072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.996079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.996275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.996281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.996480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.996488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.996790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.996797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.997111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.997124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 
00:29:21.812 [2024-11-06 11:11:12.997307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.997314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.997606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.997613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.998001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.998008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.998281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.998288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.998579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.998587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 
00:29:21.812 [2024-11-06 11:11:12.998935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.998944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.999336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.999342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.999637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.999644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:12.999973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:12.999980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:13.000287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:13.000294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 
00:29:21.812 [2024-11-06 11:11:13.000585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:13.000592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:13.000909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:13.000917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:13.001259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:13.001266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:13.001559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:13.001567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:13.001879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:13.001886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 
00:29:21.812 [2024-11-06 11:11:13.002096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:13.002104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:13.002401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:13.002408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:13.002727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:13.002733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:13.002896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:13.002905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:13.003216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:13.003224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 
00:29:21.812 [2024-11-06 11:11:13.003532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.812 [2024-11-06 11:11:13.003539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.812 qpair failed and we were unable to recover it. 00:29:21.812 [2024-11-06 11:11:13.003850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.813 [2024-11-06 11:11:13.003857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.813 qpair failed and we were unable to recover it. 00:29:21.813 [2024-11-06 11:11:13.004154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.813 [2024-11-06 11:11:13.004161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.813 qpair failed and we were unable to recover it. 00:29:21.813 [2024-11-06 11:11:13.004471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.813 [2024-11-06 11:11:13.004478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.813 qpair failed and we were unable to recover it. 00:29:21.813 [2024-11-06 11:11:13.004862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.813 [2024-11-06 11:11:13.004870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.813 qpair failed and we were unable to recover it. 
00:29:21.813 [2024-11-06 11:11:13.005188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.813 [2024-11-06 11:11:13.005195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.813 qpair failed and we were unable to recover it. 00:29:21.813 [2024-11-06 11:11:13.005387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.813 [2024-11-06 11:11:13.005395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:21.813 qpair failed and we were unable to recover it. 00:29:21.813 [2024-11-06 11:11:13.005588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a4e00 is same with the state(6) to be set 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Write completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Write completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Write 
completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Write completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Write completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Write completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Write completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Write completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Write completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Write completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Write completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Write completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Read completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 Write completed with error (sct=0, sc=8) 00:29:21.813 starting I/O failed 00:29:21.813 [2024-11-06 11:11:13.006041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.813 [2024-11-06 11:11:13.006403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.813 [2024-11-06 11:11:13.006422] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.813 qpair failed and we were unable to recover it. 00:29:21.813 [2024-11-06 11:11:13.006695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.813 [2024-11-06 11:11:13.006706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.813 qpair failed and we were unable to recover it. 00:29:21.813 [2024-11-06 11:11:13.007034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.813 [2024-11-06 11:11:13.007047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.813 qpair failed and we were unable to recover it. 00:29:21.813 [2024-11-06 11:11:13.007265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.813 [2024-11-06 11:11:13.007275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.813 qpair failed and we were unable to recover it. 00:29:21.813 [2024-11-06 11:11:13.007580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.813 [2024-11-06 11:11:13.007591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.813 qpair failed and we were unable to recover it. 00:29:21.813 [2024-11-06 11:11:13.007932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.813 [2024-11-06 11:11:13.007944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.813 qpair failed and we were unable to recover it. 
00:29:21.813 [2024-11-06 11:11:13.008261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.813 [2024-11-06 11:11:13.008271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.813 qpair failed and we were unable to recover it. 00:29:21.813 [2024-11-06 11:11:13.008454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.813 [2024-11-06 11:11:13.008466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.813 qpair failed and we were unable to recover it. 00:29:21.813 [2024-11-06 11:11:13.008789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.813 [2024-11-06 11:11:13.008799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.813 qpair failed and we were unable to recover it. 00:29:21.813 [2024-11-06 11:11:13.009105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.813 [2024-11-06 11:11:13.009116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.813 qpair failed and we were unable to recover it. 00:29:21.813 [2024-11-06 11:11:13.009380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.813 [2024-11-06 11:11:13.009390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.813 qpair failed and we were unable to recover it. 
00:29:21.816 [2024-11-06 11:11:13.043811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.816 [2024-11-06 11:11:13.043821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.816 qpair failed and we were unable to recover it. 00:29:21.816 [2024-11-06 11:11:13.044097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.816 [2024-11-06 11:11:13.044107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.816 qpair failed and we were unable to recover it. 00:29:21.816 [2024-11-06 11:11:13.044324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.816 [2024-11-06 11:11:13.044333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.816 qpair failed and we were unable to recover it. 00:29:21.816 [2024-11-06 11:11:13.044641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.816 [2024-11-06 11:11:13.044650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.816 qpair failed and we were unable to recover it. 00:29:21.816 [2024-11-06 11:11:13.044981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.816 [2024-11-06 11:11:13.044992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.816 qpair failed and we were unable to recover it. 
00:29:21.817 [2024-11-06 11:11:13.045302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.045311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.045498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.045510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.045851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.045862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.046029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.046039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.046262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.046273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 
00:29:21.817 [2024-11-06 11:11:13.046477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.046487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.046830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.046841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.047173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.047189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.047517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.047529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.047839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.047851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 
00:29:21.817 [2024-11-06 11:11:13.048189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.048198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.048497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.048506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.048819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.048830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.049117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.049127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.049459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.049469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 
00:29:21.817 [2024-11-06 11:11:13.049768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.049779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.049981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.049991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.050194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.050205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.050531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.050541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.050818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.050829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 
00:29:21.817 [2024-11-06 11:11:13.051133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.051143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.051462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.051471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.051763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.051774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.052102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.052111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.052404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.052414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 
00:29:21.817 [2024-11-06 11:11:13.052618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.052628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.052956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.052966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.053155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.053165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.053549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.053559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.053880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.053891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 
00:29:21.817 [2024-11-06 11:11:13.054206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.054216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.054503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.054513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.054911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.054922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.055251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.055261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.055496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.055506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 
00:29:21.817 [2024-11-06 11:11:13.055686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.055696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.056014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.056025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.056314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.056325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.817 [2024-11-06 11:11:13.056639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.817 [2024-11-06 11:11:13.056649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.817 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.056970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.056980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 
00:29:21.818 [2024-11-06 11:11:13.057281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.057290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.057546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.057556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.057821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.057833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.058132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.058142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.058513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.058524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 
00:29:21.818 [2024-11-06 11:11:13.058833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.058845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.059153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.059164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.059495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.059506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.059816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.059828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.060159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.060170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 
00:29:21.818 [2024-11-06 11:11:13.060472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.060483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.060786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.060797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.061105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.061116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.061449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.061459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.061666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.061676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 
00:29:21.818 [2024-11-06 11:11:13.061976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.061988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.062322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.062333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.062534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.062545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.062847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.062859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.063189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.063200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 
00:29:21.818 [2024-11-06 11:11:13.063403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.063413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.063732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.063758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.064092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.064103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.064402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.064413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.064756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.064767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 
00:29:21.818 [2024-11-06 11:11:13.065097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.065108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.065400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.065411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.065638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.065648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.065952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.065964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.066281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.066291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 
00:29:21.818 [2024-11-06 11:11:13.066626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.066637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.066963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.066974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.067308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.067319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.067608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.067618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 00:29:21.818 [2024-11-06 11:11:13.067923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.818 [2024-11-06 11:11:13.067934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.818 qpair failed and we were unable to recover it. 
00:29:21.822 [... identical connect() failed, errno = 111 / qpair failed retry sequence for tqpair=0x20af0c0 (addr=10.0.0.2, port=4420) repeated through 2024-11-06 11:11:13.100494 ...]
00:29:21.822 [2024-11-06 11:11:13.100810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.100821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.101141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.101151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.101470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.101479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.101838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.101848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.102141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.102150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 
00:29:21.822 [2024-11-06 11:11:13.102351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.102361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.102669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.102679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.102976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.102986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.103234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.103243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.103584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.103594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 
00:29:21.822 [2024-11-06 11:11:13.103912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.103922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.104209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.104218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.104543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.104553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.104879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.104896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.105058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.105068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 
00:29:21.822 [2024-11-06 11:11:13.105441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.105451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.105659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.105668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.105957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.105968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.106277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.106287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.106504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.106514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 
00:29:21.822 [2024-11-06 11:11:13.106887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.106898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.107217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.107227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.107561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.107570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.107863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.107873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.108233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.108244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 
00:29:21.822 [2024-11-06 11:11:13.108535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.108546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.108891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.108902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.109210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.109221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.109540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.109550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.109893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.109903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 
00:29:21.822 [2024-11-06 11:11:13.110185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.110195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.822 qpair failed and we were unable to recover it. 00:29:21.822 [2024-11-06 11:11:13.110500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.822 [2024-11-06 11:11:13.110511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.110790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.110800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.111167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.111178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.111484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.111494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 
00:29:21.823 [2024-11-06 11:11:13.111835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.111846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.112171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.112180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.112495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.112504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.112807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.112817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.113124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.113134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 
00:29:21.823 [2024-11-06 11:11:13.113444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.113454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.113764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.113774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.113951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.113961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.114285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.114295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.114574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.114584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 
00:29:21.823 [2024-11-06 11:11:13.114863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.114873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.115199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.115208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.115480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.115490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.115801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.115812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.116133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.116143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 
00:29:21.823 [2024-11-06 11:11:13.116427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.116436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.116768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.116778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.116966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.116976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.117304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.117314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.117470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.117481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 
00:29:21.823 [2024-11-06 11:11:13.117804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.117815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.118187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.118197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.118442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.118451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.118767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.118777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.119094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.119104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 
00:29:21.823 [2024-11-06 11:11:13.119306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.823 [2024-11-06 11:11:13.119316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.823 qpair failed and we were unable to recover it. 00:29:21.823 [2024-11-06 11:11:13.119645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.119655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.120050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.120061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.120384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.120393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.120672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.120682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 
00:29:21.824 [2024-11-06 11:11:13.120995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.121005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.121312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.121322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.121622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.121632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.121952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.121963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.122262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.122273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 
00:29:21.824 [2024-11-06 11:11:13.122571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.122582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.122895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.122905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.123193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.123203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.123475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.123484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.123789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.123799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 
00:29:21.824 [2024-11-06 11:11:13.124107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.124117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.124407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.124417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.124758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.124768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.125097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.125106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.125413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.125422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 
00:29:21.824 [2024-11-06 11:11:13.125703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.125712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.126035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.126046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.126356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.126366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.126696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.126706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 00:29:21.824 [2024-11-06 11:11:13.127009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.824 [2024-11-06 11:11:13.127020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.824 qpair failed and we were unable to recover it. 
00:29:21.824 [2024-11-06 11:11:13.127322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.824 [2024-11-06 11:11:13.127333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.824 qpair failed and we were unable to recover it.
00:29:21.824 [2024-11-06 11:11:13.127660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.824 [2024-11-06 11:11:13.127670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.824 qpair failed and we were unable to recover it.
00:29:21.824 [2024-11-06 11:11:13.127973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.824 [2024-11-06 11:11:13.127984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.824 qpair failed and we were unable to recover it.
00:29:21.824 [2024-11-06 11:11:13.128277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.824 [2024-11-06 11:11:13.128287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.824 qpair failed and we were unable to recover it.
00:29:21.824 [2024-11-06 11:11:13.128610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.824 [2024-11-06 11:11:13.128621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.824 qpair failed and we were unable to recover it.
00:29:21.824 [2024-11-06 11:11:13.128923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.824 [2024-11-06 11:11:13.128934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.824 qpair failed and we were unable to recover it.
00:29:21.824 [2024-11-06 11:11:13.129220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.824 [2024-11-06 11:11:13.129230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.824 qpair failed and we were unable to recover it.
00:29:21.824 [2024-11-06 11:11:13.129504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.824 [2024-11-06 11:11:13.129514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.824 qpair failed and we were unable to recover it.
00:29:21.824 [2024-11-06 11:11:13.129829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.824 [2024-11-06 11:11:13.129842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.824 qpair failed and we were unable to recover it.
00:29:21.824 [2024-11-06 11:11:13.130149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.824 [2024-11-06 11:11:13.130158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.824 qpair failed and we were unable to recover it.
00:29:21.824 [2024-11-06 11:11:13.130419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.824 [2024-11-06 11:11:13.130429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.824 qpair failed and we were unable to recover it.
00:29:21.824 [2024-11-06 11:11:13.130743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.824 [2024-11-06 11:11:13.130762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.824 qpair failed and we were unable to recover it.
00:29:21.824 [2024-11-06 11:11:13.131126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.824 [2024-11-06 11:11:13.131136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.824 qpair failed and we were unable to recover it.
00:29:21.824 [2024-11-06 11:11:13.131406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.824 [2024-11-06 11:11:13.131415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.824 qpair failed and we were unable to recover it.
00:29:21.824 [2024-11-06 11:11:13.131715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.131725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.132032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.132042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.132315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.132325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.132670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.132680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.132953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.132963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.133251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.133261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.133588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.133599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.133913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.133923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.134203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.134212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.134536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.134547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.134848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.134859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.135067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.135077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.135400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.135409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.135721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.135730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.136057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.136067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.136334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.136344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.136710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.136719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.137024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.137042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.137371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.137381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.137609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.137619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.137945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.137955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.138266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.138278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.138583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.138592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.825 qpair failed and we were unable to recover it.
00:29:21.825 [2024-11-06 11:11:13.138892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.825 [2024-11-06 11:11:13.138902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.139226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.139235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.139529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.139538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.139848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.139858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.140175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.140185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.140490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.140500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.140837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.140848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.141154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.141164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.141425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.141435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.141754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.141764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.142046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.142055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.142378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.142389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.142722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.142732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.142954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.142965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.143272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.143282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.143565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.143575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.143895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.143905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.144236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.144246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.144526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.144535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.144847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.144857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.145185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.145195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.145482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.145492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.145797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.145807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.146119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.146128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.146340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.146349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.146657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.146668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.146959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.146969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.147329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.147338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.147629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.147640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.147974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.147984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.148270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.148279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.148607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.148618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.148950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.148961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.149290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.149299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.149606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.149617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.149946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.149957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.150241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.150251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.150573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.826 [2024-11-06 11:11:13.150582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.826 qpair failed and we were unable to recover it.
00:29:21.826 [2024-11-06 11:11:13.150871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.150881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.151166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.151176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.151359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.151369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.151604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.151614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.151805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.151817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.152180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.152190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.152479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.152489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.152763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.152774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.153136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.153147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.153465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.153476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.153757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.153767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.154037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.154046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.154373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.154383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.154654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.154664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.154932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.154942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.155267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.155277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.155570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.155579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.155892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.155902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.156230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.156240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.156515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.156525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.156835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.156845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.157170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.157180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.157478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.157487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.157792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.157803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.158086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.158096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.158372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.158382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.158586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.158596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.158857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.158867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.159158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.159167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.159443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.159452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.159741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.159759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.160089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.160099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.827 qpair failed and we were unable to recover it.
00:29:21.827 [2024-11-06 11:11:13.160467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.827 [2024-11-06 11:11:13.160478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.160790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.160800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.161105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.161115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.161343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.161353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.161693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.161702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.162038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.162048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.162337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.162347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.162574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.162584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.162803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.162814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.162995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.163006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.163282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.163292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.163623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.163632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.163996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.164007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.164200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.164211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.164509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.164519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.164806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.164817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.165127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.165137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.165349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.165359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.165660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.165670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.166005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.166015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.166211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.166221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.166503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.166512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.166768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.166778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.167021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.167034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.167335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.167344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.167537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.167546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.167834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.167844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.168158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.168167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.168458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.168476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.168652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.168662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.168977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.168988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.169295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.169305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.169581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.169592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.169784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.169794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.828 qpair failed and we were unable to recover it.
00:29:21.828 [2024-11-06 11:11:13.170082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.828 [2024-11-06 11:11:13.170092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.170459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.170469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.170664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.170673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.171015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.171026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.171320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.171329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.171647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.171656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.171949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.171959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.172146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.172156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.172337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.172348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.172677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.172688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.172992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.173003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.173322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.173332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.173633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.173644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.173865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.173877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.174183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.174193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.174496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.174507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.174776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.174789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.175111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.175121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.175402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.175418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.175751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.175761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.176050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.176059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.176389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.176398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.176632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.176641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.176865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.176876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.177147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.177157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.177497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.177508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.177818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.177828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.178141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.178151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.178479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.178488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.178751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.178760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.179047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.179057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.179381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.179392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.179702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.179711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.180023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.180033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.180318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.180328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.180618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.180627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.180950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.180960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.181242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.829 [2024-11-06 11:11:13.181252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.829 qpair failed and we were unable to recover it.
00:29:21.829 [2024-11-06 11:11:13.181569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.830 [2024-11-06 11:11:13.181579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.830 qpair failed and we were unable to recover it.
00:29:21.830 [2024-11-06 11:11:13.181866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.830 [2024-11-06 11:11:13.181876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.830 qpair failed and we were unable to recover it.
00:29:21.830 [2024-11-06 11:11:13.182219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.830 [2024-11-06 11:11:13.182229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.830 qpair failed and we were unable to recover it.
00:29:21.830 [2024-11-06 11:11:13.182513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.830 [2024-11-06 11:11:13.182522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.830 qpair failed and we were unable to recover it.
00:29:21.830 [2024-11-06 11:11:13.182828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.830 [2024-11-06 11:11:13.182839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.830 qpair failed and we were unable to recover it.
00:29:21.830 [2024-11-06 11:11:13.183159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.830 [2024-11-06 11:11:13.183170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.830 qpair failed and we were unable to recover it.
00:29:21.830 [2024-11-06 11:11:13.183510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.830 [2024-11-06 11:11:13.183520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.830 qpair failed and we were unable to recover it.
00:29:21.830 [2024-11-06 11:11:13.183764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.830 [2024-11-06 11:11:13.183775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.830 qpair failed and we were unable to recover it.
00:29:21.830 [2024-11-06 11:11:13.184132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.830 [2024-11-06 11:11:13.184142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.830 qpair failed and we were unable to recover it.
00:29:21.830 [2024-11-06 11:11:13.184411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.830 [2024-11-06 11:11:13.184420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.830 qpair failed and we were unable to recover it.
00:29:21.830 [2024-11-06 11:11:13.184715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.830 [2024-11-06 11:11:13.184725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:21.830 qpair failed and we were unable to recover it.
00:29:21.830 [2024-11-06 11:11:13.185043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.830 [2024-11-06 11:11:13.185053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.830 qpair failed and we were unable to recover it. 00:29:21.830 [2024-11-06 11:11:13.185235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.830 [2024-11-06 11:11:13.185246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.830 qpair failed and we were unable to recover it. 00:29:21.830 [2024-11-06 11:11:13.185620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.830 [2024-11-06 11:11:13.185629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.830 qpair failed and we were unable to recover it. 00:29:21.830 [2024-11-06 11:11:13.185966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.830 [2024-11-06 11:11:13.185976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.830 qpair failed and we were unable to recover it. 00:29:21.830 [2024-11-06 11:11:13.186298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.830 [2024-11-06 11:11:13.186307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.830 qpair failed and we were unable to recover it. 
00:29:21.830 [2024-11-06 11:11:13.186594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.830 [2024-11-06 11:11:13.186603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.830 qpair failed and we were unable to recover it. 00:29:21.830 [2024-11-06 11:11:13.186884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.830 [2024-11-06 11:11:13.186894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.830 qpair failed and we were unable to recover it. 00:29:21.830 [2024-11-06 11:11:13.187194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.830 [2024-11-06 11:11:13.187204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.830 qpair failed and we were unable to recover it. 00:29:21.830 [2024-11-06 11:11:13.187513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.830 [2024-11-06 11:11:13.187524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.830 qpair failed and we were unable to recover it. 00:29:21.830 [2024-11-06 11:11:13.187805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.830 [2024-11-06 11:11:13.187815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.830 qpair failed and we were unable to recover it. 
00:29:21.830 [2024-11-06 11:11:13.188130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.830 [2024-11-06 11:11:13.188140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.830 qpair failed and we were unable to recover it. 00:29:21.830 [2024-11-06 11:11:13.188449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.830 [2024-11-06 11:11:13.188458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.830 qpair failed and we were unable to recover it. 00:29:21.830 [2024-11-06 11:11:13.188755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.830 [2024-11-06 11:11:13.188765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.830 qpair failed and we were unable to recover it. 00:29:21.830 [2024-11-06 11:11:13.189066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.830 [2024-11-06 11:11:13.189076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.830 qpair failed and we were unable to recover it. 00:29:21.830 [2024-11-06 11:11:13.189375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.830 [2024-11-06 11:11:13.189386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.830 qpair failed and we were unable to recover it. 
00:29:21.831 [2024-11-06 11:11:13.189563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.189573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.189874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.189884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.190186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.190196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.190466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.190476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.190767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.190776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 
00:29:21.831 [2024-11-06 11:11:13.191062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.191072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.191359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.191368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.191699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.191709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.192000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.192011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.192225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.192235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 
00:29:21.831 [2024-11-06 11:11:13.192536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.192546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.192873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.192883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.193169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.193179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.193498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.193508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.193812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.193823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 
00:29:21.831 [2024-11-06 11:11:13.194148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.194159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.194468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.194478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.194851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.194862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.195133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.195143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.195515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.195526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 
00:29:21.831 [2024-11-06 11:11:13.195856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.195869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.196201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.196211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.196523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.196533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.196811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.196822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.197183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.197192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 
00:29:21.831 [2024-11-06 11:11:13.197522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.197531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.197854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.197864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.198207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.198216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.198541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.198550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.198841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.198852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 
00:29:21.831 [2024-11-06 11:11:13.199171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.199181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.199470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.199479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.199782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.199792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.200095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.200105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.200435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.200444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 
00:29:21.831 [2024-11-06 11:11:13.200737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.200751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.831 [2024-11-06 11:11:13.201088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.831 [2024-11-06 11:11:13.201099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.831 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.201425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.201435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.201726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.201737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.202050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.202059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 
00:29:21.832 [2024-11-06 11:11:13.202272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.202281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.202543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.202554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.202827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.202837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.203157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.203167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.203467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.203476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 
00:29:21.832 [2024-11-06 11:11:13.203748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.203758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.204027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.204037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.204365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.204380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.204711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.204721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.205018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.205028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 
00:29:21.832 [2024-11-06 11:11:13.205331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.205340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.205649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.205658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.205960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.205970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.206265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.206274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.206611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.206620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 
00:29:21.832 [2024-11-06 11:11:13.206908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.206919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.207191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.207201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.207488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.207499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.207719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.207730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.208027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.208037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 
00:29:21.832 [2024-11-06 11:11:13.208356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.208366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.208700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.208709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.208983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.208993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.209285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.209295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.209591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.209601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 
00:29:21.832 [2024-11-06 11:11:13.209928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.209938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.210217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.210227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.210555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.210565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.210871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.210881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.211180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.211189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 
00:29:21.832 [2024-11-06 11:11:13.211503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.211514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.211836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.211845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.212166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.212175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.832 qpair failed and we were unable to recover it. 00:29:21.832 [2024-11-06 11:11:13.212508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.832 [2024-11-06 11:11:13.212517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.833 qpair failed and we were unable to recover it. 00:29:21.833 [2024-11-06 11:11:13.212800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.833 [2024-11-06 11:11:13.212812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:21.833 qpair failed and we were unable to recover it. 
00:29:22.111 [2024-11-06 11:11:13.247650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.111 [2024-11-06 11:11:13.247660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.111 qpair failed and we were unable to recover it. 00:29:22.111 [2024-11-06 11:11:13.247981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.111 [2024-11-06 11:11:13.247991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.111 qpair failed and we were unable to recover it. 00:29:22.111 [2024-11-06 11:11:13.248283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.111 [2024-11-06 11:11:13.248293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.111 qpair failed and we were unable to recover it. 00:29:22.111 [2024-11-06 11:11:13.248502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.111 [2024-11-06 11:11:13.248512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.111 qpair failed and we were unable to recover it. 00:29:22.111 [2024-11-06 11:11:13.248827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.111 [2024-11-06 11:11:13.248837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.111 qpair failed and we were unable to recover it. 
00:29:22.111 [2024-11-06 11:11:13.249034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.111 [2024-11-06 11:11:13.249044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.111 qpair failed and we were unable to recover it. 00:29:22.111 [2024-11-06 11:11:13.249373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.111 [2024-11-06 11:11:13.249383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.111 qpair failed and we were unable to recover it. 00:29:22.111 [2024-11-06 11:11:13.249707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.111 [2024-11-06 11:11:13.249717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.111 qpair failed and we were unable to recover it. 00:29:22.111 [2024-11-06 11:11:13.249935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.111 [2024-11-06 11:11:13.249945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.111 qpair failed and we were unable to recover it. 00:29:22.111 [2024-11-06 11:11:13.250227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.111 [2024-11-06 11:11:13.250237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.111 qpair failed and we were unable to recover it. 
00:29:22.111 [2024-11-06 11:11:13.250536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.111 [2024-11-06 11:11:13.250546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.111 qpair failed and we were unable to recover it. 00:29:22.111 [2024-11-06 11:11:13.250851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.111 [2024-11-06 11:11:13.250861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.111 qpair failed and we were unable to recover it. 00:29:22.111 [2024-11-06 11:11:13.251156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.111 [2024-11-06 11:11:13.251167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.111 qpair failed and we were unable to recover it. 00:29:22.111 [2024-11-06 11:11:13.251468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.111 [2024-11-06 11:11:13.251477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.251760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.251770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 
00:29:22.112 [2024-11-06 11:11:13.252078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.252088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.252407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.252417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.252718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.252728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.253042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.253052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.253234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.253243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 
00:29:22.112 [2024-11-06 11:11:13.253570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.253581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.253793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.253803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.254115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.254125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.254337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.254348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.254642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.254652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 
00:29:22.112 [2024-11-06 11:11:13.254916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.254928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.255230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.255241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.255570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.255580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.255884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.255894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.256204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.256214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 
00:29:22.112 [2024-11-06 11:11:13.256497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.256507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.256812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.256823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.257094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.257104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.257422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.257432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.257724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.257734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 
00:29:22.112 [2024-11-06 11:11:13.258045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.258055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.258396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.258406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.258574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.258585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.258874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.258884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.259212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.259222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 
00:29:22.112 [2024-11-06 11:11:13.259533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.259542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.259847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.259858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.260146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.260155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.260484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.260493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.260797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.260807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 
00:29:22.112 [2024-11-06 11:11:13.261103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.261113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.261376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.261385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.261723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.261734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.262676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.262701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.263032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.263044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 
00:29:22.112 [2024-11-06 11:11:13.263375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.263385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.263718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.112 [2024-11-06 11:11:13.263727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.112 qpair failed and we were unable to recover it. 00:29:22.112 [2024-11-06 11:11:13.263938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.263952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.264270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.264280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.264614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.264623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 
00:29:22.113 [2024-11-06 11:11:13.264906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.264916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.265238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.265248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.265548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.265558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.265872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.265882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.266209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.266219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 
00:29:22.113 [2024-11-06 11:11:13.266558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.266567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.266867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.266878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.267213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.267223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.267505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.267515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.267802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.267813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 
00:29:22.113 [2024-11-06 11:11:13.268154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.268164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.268447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.268457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.268782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.268792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.269116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.269126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.269459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.269468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 
00:29:22.113 [2024-11-06 11:11:13.269699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.269708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.270028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.270038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.270340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.270349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.270702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.270711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 00:29:22.113 [2024-11-06 11:11:13.271036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.271046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it. 
00:29:22.113 [2024-11-06 11:11:13.271349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.113 [2024-11-06 11:11:13.271359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.113 qpair failed and we were unable to recover it.
00:29:22.116 [... the same connect() failure (errno = 111) and qpair recovery error for tqpair=0x20af0c0, addr=10.0.0.2, port=4420 repeated for every reconnect attempt from 11:11:13.271667 through 11:11:13.304772 ...]
00:29:22.116 [2024-11-06 11:11:13.305097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.116 [2024-11-06 11:11:13.305107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.116 qpair failed and we were unable to recover it. 00:29:22.116 [2024-11-06 11:11:13.305425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.116 [2024-11-06 11:11:13.305435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.116 qpair failed and we were unable to recover it. 00:29:22.116 [2024-11-06 11:11:13.305753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.116 [2024-11-06 11:11:13.305763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.116 qpair failed and we were unable to recover it. 00:29:22.116 [2024-11-06 11:11:13.306065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.116 [2024-11-06 11:11:13.306075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.116 qpair failed and we were unable to recover it. 00:29:22.116 [2024-11-06 11:11:13.306348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.116 [2024-11-06 11:11:13.306358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.116 qpair failed and we were unable to recover it. 
00:29:22.116 [2024-11-06 11:11:13.306638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.116 [2024-11-06 11:11:13.306647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.116 qpair failed and we were unable to recover it. 00:29:22.116 [2024-11-06 11:11:13.306968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.116 [2024-11-06 11:11:13.306978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.116 qpair failed and we were unable to recover it. 00:29:22.116 [2024-11-06 11:11:13.307309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.116 [2024-11-06 11:11:13.307318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.116 qpair failed and we were unable to recover it. 00:29:22.116 [2024-11-06 11:11:13.307605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.116 [2024-11-06 11:11:13.307614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.116 qpair failed and we were unable to recover it. 00:29:22.116 [2024-11-06 11:11:13.307914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.307924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 
00:29:22.117 [2024-11-06 11:11:13.308100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.308110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.308325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.308337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.308651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.308662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.308988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.308998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.309331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.309342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 
00:29:22.117 [2024-11-06 11:11:13.309525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.309535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.309853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.309863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.310192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.310201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.310492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.310502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.310789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.310799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 
00:29:22.117 [2024-11-06 11:11:13.311104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.311114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.311448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.311458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.311826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.311836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.312146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.312156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.312354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.312363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 
00:29:22.117 [2024-11-06 11:11:13.312573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.312583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.312906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.312916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.313213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.313222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.313528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.313538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.313848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.313858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 
00:29:22.117 [2024-11-06 11:11:13.314142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.314152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.314485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.314496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.314830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.314841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.315169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.315179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.315473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.315483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 
00:29:22.117 [2024-11-06 11:11:13.315782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.315792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.316042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.316052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.316250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.316269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.316572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.316581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.316900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.316910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 
00:29:22.117 [2024-11-06 11:11:13.317186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.317195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.317487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.317496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.317812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.117 [2024-11-06 11:11:13.317822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.117 qpair failed and we were unable to recover it. 00:29:22.117 [2024-11-06 11:11:13.318176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.318186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.318474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.318484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 
00:29:22.118 [2024-11-06 11:11:13.318772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.318782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.319080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.319090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.319408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.319418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.319725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.319735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.319896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.319909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 
00:29:22.118 [2024-11-06 11:11:13.320219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.320228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.320520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.320530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.320874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.320884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.321138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.321148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.321463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.321472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 
00:29:22.118 [2024-11-06 11:11:13.321670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.321680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.322008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.322019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.322289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.322298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.322598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.322608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.322873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.322884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 
00:29:22.118 [2024-11-06 11:11:13.323159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.323169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.323446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.323457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.323724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.323733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.324044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.324054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.324368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.324378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 
00:29:22.118 [2024-11-06 11:11:13.324564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.324576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.324753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.324764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.325147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.325157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.325330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.325340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.325621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.325630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 
00:29:22.118 [2024-11-06 11:11:13.325847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.325857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.326035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.326045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.326364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.326374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.326695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.326705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 00:29:22.118 [2024-11-06 11:11:13.327011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.327022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it. 
00:29:22.118 [2024-11-06 11:11:13.327234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.118 [2024-11-06 11:11:13.327245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.118 qpair failed and we were unable to recover it.
[... the same three-line failure repeats, identical except for timestamps, from 11:11:13.327 through 11:11:13.362: connect() fails with errno = 111 (ECONNREFUSED) for tqpair=0x20af0c0 connecting to 10.0.0.2 port 4420, and each retry ends with "qpair failed and we were unable to recover it." ...]
00:29:22.121 [2024-11-06 11:11:13.362365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.121 [2024-11-06 11:11:13.362375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.121 qpair failed and we were unable to recover it. 00:29:22.121 [2024-11-06 11:11:13.362674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.121 [2024-11-06 11:11:13.362684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.121 qpair failed and we were unable to recover it. 00:29:22.121 [2024-11-06 11:11:13.362997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.121 [2024-11-06 11:11:13.363007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.121 qpair failed and we were unable to recover it. 00:29:22.121 [2024-11-06 11:11:13.363309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.121 [2024-11-06 11:11:13.363319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.121 qpair failed and we were unable to recover it. 00:29:22.121 [2024-11-06 11:11:13.363646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.121 [2024-11-06 11:11:13.363656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.121 qpair failed and we were unable to recover it. 
00:29:22.121 [2024-11-06 11:11:13.363863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.121 [2024-11-06 11:11:13.363874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.121 qpair failed and we were unable to recover it. 00:29:22.121 [2024-11-06 11:11:13.364193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.121 [2024-11-06 11:11:13.364203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.121 qpair failed and we were unable to recover it. 00:29:22.121 [2024-11-06 11:11:13.364490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.364499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.364829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.364839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.365129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.365139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 
00:29:22.122 [2024-11-06 11:11:13.365445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.365454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.365741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.365757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.366072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.366081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.366360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.366369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.366693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.366702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 
00:29:22.122 [2024-11-06 11:11:13.367083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.367093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.367395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.367405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.367682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.367692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.368008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.368019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.368307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.368317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 
00:29:22.122 [2024-11-06 11:11:13.368613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.368623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.368959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.368969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.369269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.369279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.369549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.369560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.369899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.369910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 
00:29:22.122 [2024-11-06 11:11:13.370260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.370270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.370603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.370613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.370908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.370918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.371200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.371209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.371481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.371491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 
00:29:22.122 [2024-11-06 11:11:13.371823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.371833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.372114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.372124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.372401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.372410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.372639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.372649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.372971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.372981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 
00:29:22.122 [2024-11-06 11:11:13.373274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.373284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.373558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.373568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.373840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.373850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.374164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.374175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.122 [2024-11-06 11:11:13.374497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.374507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 
00:29:22.122 [2024-11-06 11:11:13.374803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.122 [2024-11-06 11:11:13.374813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.122 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.375134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.375144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.375417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.375427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.375636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.375646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.375960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.375971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 
00:29:22.123 [2024-11-06 11:11:13.376298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.376308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.376614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.376625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.376995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.377006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.377304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.377314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.377539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.377549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 
00:29:22.123 [2024-11-06 11:11:13.377879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.377889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.378184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.378194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.378524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.378534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.378819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.378829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.379142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.379151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 
00:29:22.123 [2024-11-06 11:11:13.379452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.379462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.379761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.379771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.380081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.380091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.380386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.380395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.380663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.380673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 
00:29:22.123 [2024-11-06 11:11:13.380974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.380985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.381246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.381256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.381528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.381537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.381752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.381763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.382054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.382063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 
00:29:22.123 [2024-11-06 11:11:13.382374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.382384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.382554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.382564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.382885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.382895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.383058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.383069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.383310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.383320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 
00:29:22.123 [2024-11-06 11:11:13.383649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.383658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.383957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.383968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.384268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.384278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.384579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.384590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.384911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.384921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 
00:29:22.123 [2024-11-06 11:11:13.385216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.385226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.385412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.385423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.385776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.385786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.385995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.123 [2024-11-06 11:11:13.386004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.123 qpair failed and we were unable to recover it. 00:29:22.123 [2024-11-06 11:11:13.386313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.124 [2024-11-06 11:11:13.386323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.124 qpair failed and we were unable to recover it. 
00:29:22.126 [2024-11-06 11:11:13.419671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.126 [2024-11-06 11:11:13.419680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.126 qpair failed and we were unable to recover it. 00:29:22.126 [2024-11-06 11:11:13.419976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.126 [2024-11-06 11:11:13.419987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.126 qpair failed and we were unable to recover it. 00:29:22.126 [2024-11-06 11:11:13.420285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.126 [2024-11-06 11:11:13.420294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.126 qpair failed and we were unable to recover it. 00:29:22.126 [2024-11-06 11:11:13.420582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.126 [2024-11-06 11:11:13.420592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.126 qpair failed and we were unable to recover it. 00:29:22.126 [2024-11-06 11:11:13.420921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.420931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 
00:29:22.127 [2024-11-06 11:11:13.421236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.421246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.421549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.421559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.421837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.421847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.422165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.422174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.422473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.422483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 
00:29:22.127 [2024-11-06 11:11:13.422792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.422803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.423103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.423114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.423415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.423425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.423765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.423776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.424112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.424121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 
00:29:22.127 [2024-11-06 11:11:13.424407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.424417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.424745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.424761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.425069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.425079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.425375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.425385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.425654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.425664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 
00:29:22.127 [2024-11-06 11:11:13.425982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.425992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.426156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.426167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.426497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.426506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.426796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.426806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.427086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.427096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 
00:29:22.127 [2024-11-06 11:11:13.427429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.427440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.427772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.427782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.428035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.428044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.428334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.428344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.428669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.428679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 
00:29:22.127 [2024-11-06 11:11:13.428967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.428977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.429271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.429281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.429582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.429593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.429901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.429911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.430194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.430204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 
00:29:22.127 [2024-11-06 11:11:13.430521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.430531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.430819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.430829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.431035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.431045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.431372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.127 [2024-11-06 11:11:13.431383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.127 qpair failed and we were unable to recover it. 00:29:22.127 [2024-11-06 11:11:13.431640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.431650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 
00:29:22.128 [2024-11-06 11:11:13.431942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.431952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.432234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.432243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.432595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.432606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.432928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.432938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.433250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.433260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 
00:29:22.128 [2024-11-06 11:11:13.433561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.433570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.433844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.433854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.434050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.434059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.434368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.434377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.434653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.434662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 
00:29:22.128 [2024-11-06 11:11:13.435022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.435032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.435318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.435327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.435599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.435609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.435888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.435898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.436223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.436233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 
00:29:22.128 [2024-11-06 11:11:13.436527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.436537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.436814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.436824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.437128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.437137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.437409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.437418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.437751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.437761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 
00:29:22.128 [2024-11-06 11:11:13.438052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.438062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.438347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.438357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.438690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.438701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.439006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.439017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.439350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.439359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 
00:29:22.128 [2024-11-06 11:11:13.439665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.439677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.439997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.440007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.440289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.440299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.440573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.440583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.440883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.440893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 
00:29:22.128 [2024-11-06 11:11:13.441118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.441128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.441448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.441458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.441764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.441773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.442064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.442073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 00:29:22.128 [2024-11-06 11:11:13.442393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.128 [2024-11-06 11:11:13.442403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.128 qpair failed and we were unable to recover it. 
00:29:22.128 [2024-11-06 11:11:13.442706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.128 [2024-11-06 11:11:13.442716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.128 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111 / sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats with only the timestamps changing, from 11:11:13.443024 through 11:11:13.477184 ...]
00:29:22.132 [2024-11-06 11:11:13.477558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.477568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.477871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.477881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.478205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.478214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.478484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.478493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.478814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.478824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 
00:29:22.132 [2024-11-06 11:11:13.479164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.479173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.479453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.479463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.479784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.479794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.480081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.480091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.480360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.480370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 
00:29:22.132 [2024-11-06 11:11:13.480673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.480682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.480993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.481004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.481194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.481204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.481374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.481385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.481713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.481723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 
00:29:22.132 [2024-11-06 11:11:13.482061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.482072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.482281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.482291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.482591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.482600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.482911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.482921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.483246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.483256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 
00:29:22.132 [2024-11-06 11:11:13.483557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.483566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.483861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.483872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.484181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.484190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.484482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.484491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.484827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.484837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 
00:29:22.132 [2024-11-06 11:11:13.485123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.485133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.485439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.485448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.485733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.485743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.486059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.486068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.486330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.486339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 
00:29:22.132 [2024-11-06 11:11:13.486671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.486680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.486977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.486987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.487302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.487312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.487538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.487549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.487883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.487893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 
00:29:22.132 [2024-11-06 11:11:13.488189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.488200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.488527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.132 [2024-11-06 11:11:13.488537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.132 qpair failed and we were unable to recover it. 00:29:22.132 [2024-11-06 11:11:13.488871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.488881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.489215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.489227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.489507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.489517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 
00:29:22.133 [2024-11-06 11:11:13.489805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.489815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.490101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.490110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.490443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.490453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.490743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.490756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.491074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.491085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 
00:29:22.133 [2024-11-06 11:11:13.491422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.491432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.491734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.491745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.492057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.492067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.492389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.492399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.492701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.492711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 
00:29:22.133 [2024-11-06 11:11:13.493027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.493037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.493365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.493375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.493686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.493696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.494001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.494018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.494343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.494352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 
00:29:22.133 [2024-11-06 11:11:13.494660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.494670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.494968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.494977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.495266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.495276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.495565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.495574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.495868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.495878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 
00:29:22.133 [2024-11-06 11:11:13.496166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.496175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.496486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.496496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.496793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.496804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.497140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.497149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.497518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.497529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 
00:29:22.133 [2024-11-06 11:11:13.497832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.497844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.498129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.498138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.498459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.498469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.498760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.498770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.499115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.499124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 
00:29:22.133 [2024-11-06 11:11:13.499389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.499399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.499591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.499600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.499910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.499920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.500207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.500216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 00:29:22.133 [2024-11-06 11:11:13.500519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.500528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 
00:29:22.133 [2024-11-06 11:11:13.500814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.133 [2024-11-06 11:11:13.500824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.133 qpair failed and we were unable to recover it. 
[repeats elided: the same connect() failed, errno = 111 / sock connection error / qpair failed message triplet recurs ~115 times between 11:11:13.500814 and 11:11:13.535591, all for tqpair=0x20af0c0 with addr=10.0.0.2, port=4420]
00:29:22.412 [2024-11-06 11:11:13.535591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.535601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 
00:29:22.412 [2024-11-06 11:11:13.535912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.535922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.536218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.536227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.536402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.536413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.536779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.536790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.537119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.537129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 
00:29:22.412 [2024-11-06 11:11:13.537488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.537498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.537806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.537816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.538027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.538036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.538342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.538351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.538705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.538715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 
00:29:22.412 [2024-11-06 11:11:13.539064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.539075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.539248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.539259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.539342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.539352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.539656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.539667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.539979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.539990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 
00:29:22.412 [2024-11-06 11:11:13.540288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.540298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.540629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.540639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.540838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.540849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.541140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.541149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.541356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.541366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 
00:29:22.412 [2024-11-06 11:11:13.541675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.541685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.542100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.542110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.542317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.542327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.542641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.542650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.542853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.542865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 
00:29:22.412 [2024-11-06 11:11:13.543193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.543203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.543365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.412 [2024-11-06 11:11:13.543375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.412 qpair failed and we were unable to recover it. 00:29:22.412 [2024-11-06 11:11:13.543705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.543715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.544045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.544055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.544363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.544373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 
00:29:22.413 [2024-11-06 11:11:13.544660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.544669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.544956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.544967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.545254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.545264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.545594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.545605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.545936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.545947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 
00:29:22.413 [2024-11-06 11:11:13.546326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.546336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.546672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.546681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.546998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.547007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.547335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.547345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.547618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.547627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 
00:29:22.413 [2024-11-06 11:11:13.547841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.547851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.548162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.548172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.548481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.548490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.548795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.548805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.549148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.549158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 
00:29:22.413 [2024-11-06 11:11:13.549441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.549451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.549710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.549720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.550047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.550058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.550338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.550348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.550531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.550542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 
00:29:22.413 [2024-11-06 11:11:13.550878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.550888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.551220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.551230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.551560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.551571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.551879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.551889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.552091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.552101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 
00:29:22.413 [2024-11-06 11:11:13.552433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.552444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.552652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.552663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.552985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.552996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.553264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.553274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.553586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.553596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 
00:29:22.413 [2024-11-06 11:11:13.553875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.553885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.413 [2024-11-06 11:11:13.554234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.413 [2024-11-06 11:11:13.554243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.413 qpair failed and we were unable to recover it. 00:29:22.414 [2024-11-06 11:11:13.554553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.414 [2024-11-06 11:11:13.554563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.414 qpair failed and we were unable to recover it. 00:29:22.414 [2024-11-06 11:11:13.554862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.414 [2024-11-06 11:11:13.554872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.414 qpair failed and we were unable to recover it. 00:29:22.414 [2024-11-06 11:11:13.555176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.414 [2024-11-06 11:11:13.555186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.414 qpair failed and we were unable to recover it. 
00:29:22.414 [2024-11-06 11:11:13.555445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.414 [2024-11-06 11:11:13.555458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.414 qpair failed and we were unable to recover it. 00:29:22.414 [2024-11-06 11:11:13.555671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.414 [2024-11-06 11:11:13.555681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.414 qpair failed and we were unable to recover it. 00:29:22.414 [2024-11-06 11:11:13.555985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.414 [2024-11-06 11:11:13.555996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.414 qpair failed and we were unable to recover it. 00:29:22.414 [2024-11-06 11:11:13.556296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.414 [2024-11-06 11:11:13.556306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.414 qpair failed and we were unable to recover it. 00:29:22.414 [2024-11-06 11:11:13.556639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.414 [2024-11-06 11:11:13.556650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.414 qpair failed and we were unable to recover it. 
00:29:22.414 [2024-11-06 11:11:13.556960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.414 [2024-11-06 11:11:13.556971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.414 qpair failed and we were unable to recover it. 00:29:22.414 [2024-11-06 11:11:13.557305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.414 [2024-11-06 11:11:13.557316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.414 qpair failed and we were unable to recover it. 00:29:22.414 [2024-11-06 11:11:13.557411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.414 [2024-11-06 11:11:13.557421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.414 qpair failed and we were unable to recover it. 00:29:22.414 [2024-11-06 11:11:13.557721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.414 [2024-11-06 11:11:13.557730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.414 qpair failed and we were unable to recover it. 00:29:22.414 [2024-11-06 11:11:13.558045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.414 [2024-11-06 11:11:13.558055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.414 qpair failed and we were unable to recover it. 
00:29:22.414 [2024-11-06 11:11:13.558362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.414 [2024-11-06 11:11:13.558371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.414 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 11:11:13.558 through 11:11:13.591 ...]
00:29:22.418 [2024-11-06 11:11:13.591362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.591373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.591689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.591700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.591996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.592006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.592250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.592260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.592609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.592619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 
00:29:22.418 [2024-11-06 11:11:13.592943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.592954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.593256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.593266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.593577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.593587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.593788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.593797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.594132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.594142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 
00:29:22.418 [2024-11-06 11:11:13.594456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.594465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.594646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.594657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.594903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.594913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.595233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.595243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.595410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.595420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 
00:29:22.418 [2024-11-06 11:11:13.595805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.595815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.596132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.596141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.596446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.596455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.596788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.596798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.597086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.597096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 
00:29:22.418 [2024-11-06 11:11:13.597424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.597434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.597708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.597718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.597985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.597996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.598298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.598309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.598654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.598667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 
00:29:22.418 [2024-11-06 11:11:13.598979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.598991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.599304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.599314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.599620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.599630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.599961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.599971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 00:29:22.418 [2024-11-06 11:11:13.600300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.600309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.418 qpair failed and we were unable to recover it. 
00:29:22.418 [2024-11-06 11:11:13.600641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.418 [2024-11-06 11:11:13.600651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.600818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.600830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.601145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.601154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.601459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.601468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.601640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.601651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 
00:29:22.419 [2024-11-06 11:11:13.601978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.601988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.602288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.602298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.602487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.602497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.602830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.602840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.603145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.603155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 
00:29:22.419 [2024-11-06 11:11:13.603419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.603428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.603640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.603650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.604012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.604022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.604344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.604354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.604560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.604570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 
00:29:22.419 [2024-11-06 11:11:13.604878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.604888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.605192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.605202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.605513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.605523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.605819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.605829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.606116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.606126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 
00:29:22.419 [2024-11-06 11:11:13.606473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.606483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.606685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.606697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.607023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.607033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.607311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.607321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.607610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.607620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 
00:29:22.419 [2024-11-06 11:11:13.607921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.607931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.608308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.608317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.608513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.608522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.608818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.608828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.419 [2024-11-06 11:11:13.609159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.609169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 
00:29:22.419 [2024-11-06 11:11:13.609461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.419 [2024-11-06 11:11:13.609470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.419 qpair failed and we were unable to recover it. 00:29:22.420 [2024-11-06 11:11:13.609787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.609798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 00:29:22.420 [2024-11-06 11:11:13.610119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.610129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 00:29:22.420 [2024-11-06 11:11:13.610402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.610412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 00:29:22.420 [2024-11-06 11:11:13.610741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.610763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 
00:29:22.420 [2024-11-06 11:11:13.611048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.611058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 00:29:22.420 [2024-11-06 11:11:13.611352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.611361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 00:29:22.420 [2024-11-06 11:11:13.611538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.611547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 00:29:22.420 [2024-11-06 11:11:13.611822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.611832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 00:29:22.420 [2024-11-06 11:11:13.612167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.612176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 
00:29:22.420 [2024-11-06 11:11:13.612487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.612496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 00:29:22.420 [2024-11-06 11:11:13.612798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.612808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 00:29:22.420 [2024-11-06 11:11:13.613115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.613124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 00:29:22.420 [2024-11-06 11:11:13.613431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.613440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 00:29:22.420 [2024-11-06 11:11:13.613724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.613734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 
00:29:22.420 [2024-11-06 11:11:13.614041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.614051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 00:29:22.420 [2024-11-06 11:11:13.614358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.614367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 00:29:22.420 [2024-11-06 11:11:13.614589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.614600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 00:29:22.420 [2024-11-06 11:11:13.614898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.614908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 00:29:22.420 [2024-11-06 11:11:13.615222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.420 [2024-11-06 11:11:13.615232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.420 qpair failed and we were unable to recover it. 
00:29:22.424 [2024-11-06 11:11:13.648743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.648760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.649039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.649050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.649389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.649399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.649697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.649707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.650009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.650020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 
00:29:22.424 [2024-11-06 11:11:13.650310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.650321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.650629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.650639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.650966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.650976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.651304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.651315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.651611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.651621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 
00:29:22.424 [2024-11-06 11:11:13.651916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.651927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.652213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.652224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.652514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.652525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.652824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.652835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.653149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.653158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 
00:29:22.424 [2024-11-06 11:11:13.653486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.653496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.653799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.653809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.654094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.654104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.654376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.654386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.654695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.654705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 
00:29:22.424 [2024-11-06 11:11:13.654999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.655009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.655360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.655369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.655681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.655690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.655978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.655988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.656230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.656241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 
00:29:22.424 [2024-11-06 11:11:13.656532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.656541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.656819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.656830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.657137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.657147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.657445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.657455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.657785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.657795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 
00:29:22.424 [2024-11-06 11:11:13.658160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.658171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.658474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.658483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.658767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.658777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.659046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.659055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.659346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.659355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 
00:29:22.424 [2024-11-06 11:11:13.659687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.659696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.659983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.659993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.424 qpair failed and we were unable to recover it. 00:29:22.424 [2024-11-06 11:11:13.660266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.424 [2024-11-06 11:11:13.660275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.660570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.660580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.660876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.660886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 
00:29:22.425 [2024-11-06 11:11:13.661263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.661274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.661602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.661612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.661917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.661927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.662207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.662217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.662506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.662515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 
00:29:22.425 [2024-11-06 11:11:13.662833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.662843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.663131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.663140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.663470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.663479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.663784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.663794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.664083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.664093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 
00:29:22.425 [2024-11-06 11:11:13.664379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.664389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.664710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.664722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.665029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.665039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.665307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.665318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.665543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.665554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 
00:29:22.425 [2024-11-06 11:11:13.665854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.665865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.666196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.666206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.666512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.666521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.666781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.666792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.667103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.667112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 
00:29:22.425 [2024-11-06 11:11:13.667378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.667388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.667658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.667668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.667970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.667981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.668252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.668262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.668423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.668434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 
00:29:22.425 [2024-11-06 11:11:13.668758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.668769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.669097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.669106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.669404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.669413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.669725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.669734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.425 [2024-11-06 11:11:13.670048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.670059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 
00:29:22.425 [2024-11-06 11:11:13.670375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.425 [2024-11-06 11:11:13.670384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.425 qpair failed and we were unable to recover it. 00:29:22.426 [2024-11-06 11:11:13.670690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.426 [2024-11-06 11:11:13.670699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.426 qpair failed and we were unable to recover it. 00:29:22.426 [2024-11-06 11:11:13.671001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.426 [2024-11-06 11:11:13.671011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.426 qpair failed and we were unable to recover it. 00:29:22.426 [2024-11-06 11:11:13.671341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.426 [2024-11-06 11:11:13.671351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.426 qpair failed and we were unable to recover it. 00:29:22.426 [2024-11-06 11:11:13.671524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.426 [2024-11-06 11:11:13.671534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.426 qpair failed and we were unable to recover it. 
00:29:22.426 [2024-11-06 11:11:13.671847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.426 [2024-11-06 11:11:13.671857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.426 qpair failed and we were unable to recover it.
00:29:22.426 [... the same three-message failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x20af0c0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeated ~115 times, timestamps 11:11:13.672148 through 11:11:13.706867, log times 00:29:22.426–00:29:22.429 ...]
00:29:22.429 [2024-11-06 11:11:13.707083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.707093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.707194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.707204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.707462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.707472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.707809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.707819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.708107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.708116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 
00:29:22.429 [2024-11-06 11:11:13.708445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.708455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.708621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.708631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.708936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.708947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.709129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.709141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.709477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.709486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 
00:29:22.429 [2024-11-06 11:11:13.709824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.709834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.710145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.710155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.710414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.710424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.710708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.710718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.711031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.711041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 
00:29:22.429 [2024-11-06 11:11:13.711359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.711370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.711705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.711715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.711860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.711871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.712204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.712215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.712549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.712559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 
00:29:22.429 [2024-11-06 11:11:13.712857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.712867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.713187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.713196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.713530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.713539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.713760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.713770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.713942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.713953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 
00:29:22.429 [2024-11-06 11:11:13.714127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.714137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.714324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.714334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.714533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.714542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.714852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.714862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.429 [2024-11-06 11:11:13.715128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.715138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 
00:29:22.429 [2024-11-06 11:11:13.715476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.429 [2024-11-06 11:11:13.715486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.429 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.715812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.715822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.716190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.716199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.716406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.716415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.716728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.716738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 
00:29:22.430 [2024-11-06 11:11:13.717081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.717091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.717421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.717431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.717701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.717711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.717978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.717988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.718293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.718302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 
00:29:22.430 [2024-11-06 11:11:13.718634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.718644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.718946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.718956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.719241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.719251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.719526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.719536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.719853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.719864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 
00:29:22.430 [2024-11-06 11:11:13.720184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.720194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.720522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.720533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.720857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.720868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.721178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.721187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.721479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.721489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 
00:29:22.430 [2024-11-06 11:11:13.721838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.721848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.722141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.722153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.722474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.722484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.722812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.722822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.723032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.723041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 
00:29:22.430 [2024-11-06 11:11:13.723331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.723341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.723663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.723673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.723985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.723995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.724327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.724336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.724706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.724717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 
00:29:22.430 [2024-11-06 11:11:13.724991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.725001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.725279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.725289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.725566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.725575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.725898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.725908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.726239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.726248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 
00:29:22.430 [2024-11-06 11:11:13.726536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.726546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.726843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.726853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.727153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.727163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.727473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.727483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 00:29:22.430 [2024-11-06 11:11:13.727816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.430 [2024-11-06 11:11:13.727826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.430 qpair failed and we were unable to recover it. 
00:29:22.431 [2024-11-06 11:11:13.728155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.431 [2024-11-06 11:11:13.728165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.431 qpair failed and we were unable to recover it. 00:29:22.431 [2024-11-06 11:11:13.728471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.431 [2024-11-06 11:11:13.728480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.431 qpair failed and we were unable to recover it. 00:29:22.431 [2024-11-06 11:11:13.728782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.431 [2024-11-06 11:11:13.728792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.431 qpair failed and we were unable to recover it. 00:29:22.431 [2024-11-06 11:11:13.728994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.431 [2024-11-06 11:11:13.729004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.431 qpair failed and we were unable to recover it. 00:29:22.431 [2024-11-06 11:11:13.729334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.431 [2024-11-06 11:11:13.729343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.431 qpair failed and we were unable to recover it. 
00:29:22.431 [2024-11-06 11:11:13.729638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.431 [2024-11-06 11:11:13.729648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.431 qpair failed and we were unable to recover it. 00:29:22.431 [2024-11-06 11:11:13.729955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.431 [2024-11-06 11:11:13.729965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.431 qpair failed and we were unable to recover it. 00:29:22.431 [2024-11-06 11:11:13.730295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.431 [2024-11-06 11:11:13.730305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.431 qpair failed and we were unable to recover it. 00:29:22.431 [2024-11-06 11:11:13.730633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.431 [2024-11-06 11:11:13.730645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.431 qpair failed and we were unable to recover it. 00:29:22.431 [2024-11-06 11:11:13.730974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.431 [2024-11-06 11:11:13.730984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.431 qpair failed and we were unable to recover it. 
00:29:22.434 [2024-11-06 11:11:13.764159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.764168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.764478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.764488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.764684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.764694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.765034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.765044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.765204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.765215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 
00:29:22.434 [2024-11-06 11:11:13.765436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.765446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.765734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.765743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.766069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.766079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.766391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.766400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.766682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.766692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 
00:29:22.434 [2024-11-06 11:11:13.767013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.767026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.767314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.767323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.767594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.767604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.767901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.767911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.768239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.768249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 
00:29:22.434 [2024-11-06 11:11:13.768530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.768540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.768841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.768851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.769144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.769154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.769375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.769385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.769576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.769585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 
00:29:22.434 [2024-11-06 11:11:13.769901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.769911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.770207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.770216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.770523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.770533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.770855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.770865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.771146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.771158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 
00:29:22.434 [2024-11-06 11:11:13.771484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.771494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.771708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.771719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.772030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.772042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.772362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.772372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 00:29:22.434 [2024-11-06 11:11:13.772699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.434 [2024-11-06 11:11:13.772710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.434 qpair failed and we were unable to recover it. 
00:29:22.434 [2024-11-06 11:11:13.773009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.773020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.773352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.773363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.773695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.773705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.773986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.773998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.774301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.774311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 
00:29:22.435 [2024-11-06 11:11:13.774615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.774625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.774952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.774963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.775222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.775232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.775537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.775547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.775767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.775778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 
00:29:22.435 [2024-11-06 11:11:13.776132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.776143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.776441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.776452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.776752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.776768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.777044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.777055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.777360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.777371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 
00:29:22.435 [2024-11-06 11:11:13.777685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.777695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.778055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.778065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.778362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.778372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.778638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.778648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.778845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.778854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 
00:29:22.435 [2024-11-06 11:11:13.779129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.779139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.779421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.779431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.779695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.779704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.780020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.780031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.780311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.780321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 
00:29:22.435 [2024-11-06 11:11:13.780654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.780664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.780970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.780980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.781263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.781273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.781548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.781558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.781884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.781894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 
00:29:22.435 [2024-11-06 11:11:13.782126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.782135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.782418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.782428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.782683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.782694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.783006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.783017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.783342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.783353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 
00:29:22.435 [2024-11-06 11:11:13.783657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.783668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.784069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.784080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.784375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.784384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.784727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.435 [2024-11-06 11:11:13.784737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.435 qpair failed and we were unable to recover it. 00:29:22.435 [2024-11-06 11:11:13.785087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.785098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 
00:29:22.436 [2024-11-06 11:11:13.785280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.785289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.785602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.785612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.785857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.785867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.786223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.786233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.786523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.786532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 
00:29:22.436 [2024-11-06 11:11:13.786799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.786809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.786942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.786952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.787232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.787241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.787605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.787617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.788013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.788023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 
00:29:22.436 [2024-11-06 11:11:13.788292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.788302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.788574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.788584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.788858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.788869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.789198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.789208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.789508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.789518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 
00:29:22.436 [2024-11-06 11:11:13.789728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.789738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.790093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.790103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.790378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.790389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.790711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.790721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.791041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.791051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 
00:29:22.436 [2024-11-06 11:11:13.791349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.791359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.791692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.791702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.792020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.792031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.792351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.792361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.792449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.792458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 
00:29:22.436 [2024-11-06 11:11:13.792673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.792684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.792977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.792989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.793284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.793294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.793603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.793614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.436 [2024-11-06 11:11:13.793814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.793824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 
00:29:22.436 [2024-11-06 11:11:13.794142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.436 [2024-11-06 11:11:13.794152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.436 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.794388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.794398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.794701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.794711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.794932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.794942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.795271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.795281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 
00:29:22.437 [2024-11-06 11:11:13.795588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.795599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.795911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.795921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.796142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.796152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.796492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.796502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.796803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.796813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 
00:29:22.437 [2024-11-06 11:11:13.797094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.797104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.797312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.797332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.797615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.797625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.797944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.797955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.798321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.798332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 
00:29:22.437 [2024-11-06 11:11:13.798651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.798661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.798967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.798977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.799182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.799192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.799530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.799540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.799855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.799866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 
00:29:22.437 [2024-11-06 11:11:13.800065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.800075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.800379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.800388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.800578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.800588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.800785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.800796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.801154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.801163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 
00:29:22.437 [2024-11-06 11:11:13.801509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.801519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.801811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.801821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.802126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.802136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.802466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.802476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.802774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.802784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 
00:29:22.437 [2024-11-06 11:11:13.803118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.803128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.803320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.803329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.803639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.803649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.803953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.803963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.804275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.804285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 
00:29:22.437 [2024-11-06 11:11:13.804576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.804586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.804881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.437 [2024-11-06 11:11:13.804891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.437 qpair failed and we were unable to recover it. 00:29:22.437 [2024-11-06 11:11:13.805087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.805097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.805289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.805299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.805500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.805510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 
00:29:22.438 [2024-11-06 11:11:13.805834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.805845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.806174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.806184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.806499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.806509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.806679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.806689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.807028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.807039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 
00:29:22.438 [2024-11-06 11:11:13.807366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.807376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.807674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.807684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.807960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.807971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.808302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.808313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.808623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.808633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 
00:29:22.438 [2024-11-06 11:11:13.808800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.808810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.809081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.809091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.809349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.809358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.809548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.809557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.809936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.809946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 
00:29:22.438 [2024-11-06 11:11:13.810260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.810269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.810440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.810450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.810618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.810629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.810967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.810977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.811273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.811282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 
00:29:22.438 [2024-11-06 11:11:13.811613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.811623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.811988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.811999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.812069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.812078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.812409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.812420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.812741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.812755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 
00:29:22.438 [2024-11-06 11:11:13.813059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.813069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.813353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.813363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.813645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.813654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.814059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.814069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.814241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.814251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 
00:29:22.438 [2024-11-06 11:11:13.814547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.814557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.814762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.814773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.814964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.814974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.815166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.815178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.438 [2024-11-06 11:11:13.815382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.815392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 
00:29:22.438 [2024-11-06 11:11:13.815588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.438 [2024-11-06 11:11:13.815599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.438 qpair failed and we were unable to recover it. 00:29:22.439 [2024-11-06 11:11:13.815908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.439 [2024-11-06 11:11:13.815918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.439 qpair failed and we were unable to recover it. 00:29:22.439 [2024-11-06 11:11:13.816251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.439 [2024-11-06 11:11:13.816260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.439 qpair failed and we were unable to recover it. 00:29:22.439 [2024-11-06 11:11:13.816525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.439 [2024-11-06 11:11:13.816534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.439 qpair failed and we were unable to recover it. 00:29:22.713 [2024-11-06 11:11:13.816863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.713 [2024-11-06 11:11:13.816875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.713 qpair failed and we were unable to recover it. 
00:29:22.713 [2024-11-06 11:11:13.817042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.817051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.817229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.817238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.817531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.817541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.817830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.817842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.818124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.818136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.818469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.818479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.818800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.818811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.819008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.819018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.819314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.819325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.819660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.819672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.820008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.820020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.820228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.820239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.820407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.820418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.820717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.820728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.821039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.821050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.821441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.821451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.821762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.821773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.822097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.822107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.713 [2024-11-06 11:11:13.822439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.713 [2024-11-06 11:11:13.822450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.713 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.822760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.822772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.823081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.823094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.823405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.823416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.823720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.823731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.824064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.824075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.824401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.824411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.824735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.824750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.825134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.825145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.825447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.825457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.825769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.825780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.826118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.826129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.826433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.826444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.826775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.826786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.827086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.827097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.827401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.827412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.827717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.827728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.828038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.828050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.828356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.828368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.828710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.828721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.829036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.829049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.829377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.829388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.829696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.829706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.830032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.830044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.830364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.830375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.830724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.830736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.831043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.831055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.831377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.831388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.831701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.831712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.831905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.831918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.832253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.832264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.832564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.832574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.832900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.832911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.833311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.833322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.833635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.833645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.834038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.834050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.834356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.834367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.834696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.834707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.835021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.714 [2024-11-06 11:11:13.835033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.714 qpair failed and we were unable to recover it.
00:29:22.714 [2024-11-06 11:11:13.835399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.835411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.835725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.835736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.835911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.835923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.836269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.836280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.836612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.836623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.836925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.836937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.837255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.837266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.837448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.837459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.837646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.837658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.838048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.838059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.838354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.838364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.838530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.838541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.838847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.838858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.839150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.839161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.839455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.839466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.839772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.839783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.840088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.840098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.840408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.840419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.840720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.840731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.841051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.841064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.841374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.841384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.841691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.841702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.841974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.841985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.842334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.842345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.842604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.842614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.842912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.842923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.843227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.843237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.843563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.843574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.843879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.843890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.844174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.844185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.844494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.844505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.844814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.844826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.845122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.845132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.845432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.845442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.845754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.845766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.846099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.846112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.846442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.846452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.846612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.715 [2024-11-06 11:11:13.846623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.715 qpair failed and we were unable to recover it.
00:29:22.715 [2024-11-06 11:11:13.846932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.715 [2024-11-06 11:11:13.846943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.715 qpair failed and we were unable to recover it. 00:29:22.715 [2024-11-06 11:11:13.847261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.847272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.847573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.847583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.847878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.847889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.848231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.848242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 
00:29:22.716 [2024-11-06 11:11:13.848550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.848562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.848943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.848954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.849138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.849149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.849429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.849440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.849771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.849782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 
00:29:22.716 [2024-11-06 11:11:13.850102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.850113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.850419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.850429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.850619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.850629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.850952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.850963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.851288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.851299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 
00:29:22.716 [2024-11-06 11:11:13.851630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.851642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.851978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.851989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.852300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.852311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.852636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.852647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.853010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.853022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 
00:29:22.716 [2024-11-06 11:11:13.853320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.853333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.853522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.853533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.853854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.853865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.854180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.854192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.854486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.854498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 
00:29:22.716 [2024-11-06 11:11:13.854812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.854824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.855131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.855141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.855428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.855439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.855729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.855740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.856063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.856074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 
00:29:22.716 [2024-11-06 11:11:13.856350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.856361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.856658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.856670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.856980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.856992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.857318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.857330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.857632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.857644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 
00:29:22.716 [2024-11-06 11:11:13.857806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.857819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.858149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.858161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.858469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.858481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.858789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.716 [2024-11-06 11:11:13.858801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.716 qpair failed and we were unable to recover it. 00:29:22.716 [2024-11-06 11:11:13.859105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.859117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 
00:29:22.717 [2024-11-06 11:11:13.859379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.859392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.859709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.859721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.860010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.860022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.860199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.860212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.860521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.860533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 
00:29:22.717 [2024-11-06 11:11:13.860806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.860818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.861125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.861136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.861464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.861476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.861809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.861821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.862071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.862081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 
00:29:22.717 [2024-11-06 11:11:13.862355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.862365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.862575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.862586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.862916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.862927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.863226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.863236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.863529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.863540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 
00:29:22.717 [2024-11-06 11:11:13.863773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.863784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.864083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.864096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.864429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.864441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.864752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.864763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.865097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.865108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 
00:29:22.717 [2024-11-06 11:11:13.865384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.865395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.865696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.865707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.865980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.865991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.866319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.866331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.866658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.866668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 
00:29:22.717 [2024-11-06 11:11:13.866987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.867007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.867330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.867341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.867616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.867626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.867950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.867962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.868295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.868306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 
00:29:22.717 [2024-11-06 11:11:13.868597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.868608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.868910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.868921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.869206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.869217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.869516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.869527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 00:29:22.717 [2024-11-06 11:11:13.869827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.717 [2024-11-06 11:11:13.869838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.717 qpair failed and we were unable to recover it. 
00:29:22.717 [2024-11-06 11:11:13.870173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.718 [2024-11-06 11:11:13.870184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.718 qpair failed and we were unable to recover it. 00:29:22.718 [2024-11-06 11:11:13.870490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.718 [2024-11-06 11:11:13.870501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.718 qpair failed and we were unable to recover it. 00:29:22.718 [2024-11-06 11:11:13.870769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.718 [2024-11-06 11:11:13.870781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.718 qpair failed and we were unable to recover it. 00:29:22.718 [2024-11-06 11:11:13.871083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.718 [2024-11-06 11:11:13.871094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.718 qpair failed and we were unable to recover it. 00:29:22.718 [2024-11-06 11:11:13.871403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.718 [2024-11-06 11:11:13.871414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.718 qpair failed and we were unable to recover it. 
00:29:22.718 [2024-11-06 11:11:13.871731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.718 [2024-11-06 11:11:13.871741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.718 qpair failed and we were unable to recover it. 00:29:22.718 [2024-11-06 11:11:13.872029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.718 [2024-11-06 11:11:13.872040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.718 qpair failed and we were unable to recover it. 00:29:22.718 [2024-11-06 11:11:13.872328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.718 [2024-11-06 11:11:13.872339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.718 qpair failed and we were unable to recover it. 00:29:22.718 [2024-11-06 11:11:13.872646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.718 [2024-11-06 11:11:13.872657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.718 qpair failed and we were unable to recover it. 00:29:22.718 [2024-11-06 11:11:13.872964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.718 [2024-11-06 11:11:13.872975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.718 qpair failed and we were unable to recover it. 
00:29:22.718 [2024-11-06 11:11:13.873153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.873165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.873481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.873493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.873833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.873844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.874176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.874189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.874498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.874509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.874812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.874823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.875134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.875146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.875436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.875447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.875779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.875790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.876092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.876103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.876408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.876418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.876749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.876761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.877058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.877069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.877370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.877380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.877671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.877682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.877988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.878000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.878299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.878310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.878633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.878645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.878967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.878978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.879286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.879296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.879629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.879641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.879966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.879978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.880283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.880295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.880620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.880631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.880907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.880919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.881240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.718 [2024-11-06 11:11:13.881251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.718 qpair failed and we were unable to recover it.
00:29:22.718 [2024-11-06 11:11:13.881582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.881593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.881884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.881895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.882199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.882209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.882546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.882557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.882875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.882889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.883205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.883216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.883502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.883513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.883819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.883831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.884128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.884139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.884456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.884466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.884771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.884782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.885074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.885085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.885439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.885450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.885753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.885766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.886090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.886101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.886436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.886447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.886749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.886761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.887043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.887054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.887393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.887404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.887733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.887745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.888076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.888088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.888404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.888416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.888621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.888631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.888948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.888959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.889243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.889254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.889587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.889598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.889906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.889917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.890248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.890260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.890559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.890570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.890870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.890882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.891170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.891181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.891486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.891499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.891807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.891818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.892151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.892163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.892479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.892490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.892800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.892812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.893112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.893123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.893427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.893438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.719 qpair failed and we were unable to recover it.
00:29:22.719 [2024-11-06 11:11:13.893730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.719 [2024-11-06 11:11:13.893741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.894056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.894067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.894383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.894395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.894723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.894734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.895054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.895066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.895358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.895371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.895692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.895704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.896015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.896028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.896359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.896370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.896682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.896694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.897007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.897020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.897343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.897355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.897661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.897672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.897969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.897981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.898326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.898339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.898609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.898621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.898924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.898937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.899249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.899261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.899574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.899585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.899921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.899933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.900233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.900246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.900614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.900626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.900956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.900969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.901298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.901310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.901633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.901645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.901957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.901969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.902292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.902304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.902595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.902606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.902924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.902937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.903124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.903137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.903461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.903473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.903801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.903813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.904128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.904138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.904438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.904448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.904785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.904797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.905117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.905128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.905453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.905463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.905767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.905778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.720 [2024-11-06 11:11:13.906057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.720 [2024-11-06 11:11:13.906069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.720 qpair failed and we were unable to recover it.
00:29:22.721 [2024-11-06 11:11:13.906366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.721 [2024-11-06 11:11:13.906377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.721 qpair failed and we were unable to recover it.
00:29:22.721 [2024-11-06 11:11:13.906655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.721 [2024-11-06 11:11:13.906666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.721 qpair failed and we were unable to recover it.
00:29:22.721 [2024-11-06 11:11:13.906838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.721 [2024-11-06 11:11:13.906850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.721 qpair failed and we were unable to recover it.
00:29:22.721 [2024-11-06 11:11:13.907200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.721 [2024-11-06 11:11:13.907211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.721 qpair failed and we were unable to recover it.
00:29:22.721 [2024-11-06 11:11:13.907477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.721 [2024-11-06 11:11:13.907488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.721 qpair failed and we were unable to recover it.
00:29:22.721 [2024-11-06 11:11:13.907824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.721 [2024-11-06 11:11:13.907835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.721 qpair failed and we were unable to recover it.
00:29:22.721 [2024-11-06 11:11:13.908175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.721 [2024-11-06 11:11:13.908187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.721 qpair failed and we were unable to recover it.
00:29:22.721 [2024-11-06 11:11:13.908515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.721 [2024-11-06 11:11:13.908526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.721 qpair failed and we were unable to recover it.
00:29:22.721 [2024-11-06 11:11:13.908835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.721 [2024-11-06 11:11:13.908846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.721 qpair failed and we were unable to recover it.
00:29:22.721 [2024-11-06 11:11:13.909167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.721 [2024-11-06 11:11:13.909178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.721 qpair failed and we were unable to recover it.
00:29:22.721 [2024-11-06 11:11:13.909508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.721 [2024-11-06 11:11:13.909520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.721 qpair failed and we were unable to recover it.
00:29:22.721 [2024-11-06 11:11:13.909825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.721 [2024-11-06 11:11:13.909836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.721 qpair failed and we were unable to recover it.
00:29:22.721 [2024-11-06 11:11:13.910143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.910153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.910485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.910495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.910797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.910808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.911089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.911100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.911376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.911387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 
00:29:22.721 [2024-11-06 11:11:13.911695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.911705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.912023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.912034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.912371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.912381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.912678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.912689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.913018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.913030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 
00:29:22.721 [2024-11-06 11:11:13.913327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.913340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.913656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.913668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.913979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.913992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.914294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.914306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.914627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.914639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 
00:29:22.721 [2024-11-06 11:11:13.914984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.914996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.915177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.915189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.915529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.915540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.915867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.915878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.916106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.916117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 
00:29:22.721 [2024-11-06 11:11:13.916441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.916452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.916755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.916766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.917047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.917058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.917356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.917368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.917703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.917714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 
00:29:22.721 [2024-11-06 11:11:13.918040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.918053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.721 [2024-11-06 11:11:13.918354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.721 [2024-11-06 11:11:13.918365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.721 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.918663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.918675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.918857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.918868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.919188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.919198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 
00:29:22.722 [2024-11-06 11:11:13.919523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.919534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.919761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.919773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.920103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.920114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.920429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.920442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.920649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.920661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 
00:29:22.722 [2024-11-06 11:11:13.921019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.921030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.921333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.921344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.921677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.921690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.921984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.921995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.922308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.922319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 
00:29:22.722 [2024-11-06 11:11:13.922620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.922633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.923030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.923042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.923350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.923361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.923671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.923682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.924037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.924048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 
00:29:22.722 [2024-11-06 11:11:13.924359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.924369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.924640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.924651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.924939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.924950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.925267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.925278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.925509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.925520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 
00:29:22.722 [2024-11-06 11:11:13.925825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.925836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.926026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.926038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.926318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.926329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.926630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.926641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.926824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.926835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 
00:29:22.722 [2024-11-06 11:11:13.927204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.927214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.927519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.927530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.927820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.927830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.928164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.928175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.928401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.928412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 
00:29:22.722 [2024-11-06 11:11:13.928739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.722 [2024-11-06 11:11:13.928753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.722 qpair failed and we were unable to recover it. 00:29:22.722 [2024-11-06 11:11:13.929097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.723 [2024-11-06 11:11:13.929107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.723 qpair failed and we were unable to recover it. 00:29:22.723 [2024-11-06 11:11:13.929421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.723 [2024-11-06 11:11:13.929433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.723 qpair failed and we were unable to recover it. 00:29:22.723 [2024-11-06 11:11:13.929775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.723 [2024-11-06 11:11:13.929786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.723 qpair failed and we were unable to recover it. 00:29:22.723 [2024-11-06 11:11:13.930100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.723 [2024-11-06 11:11:13.930115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.723 qpair failed and we were unable to recover it. 
00:29:22.723 [2024-11-06 11:11:13.930442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.723 [2024-11-06 11:11:13.930453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.723 qpair failed and we were unable to recover it. 00:29:22.723 [2024-11-06 11:11:13.930668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.723 [2024-11-06 11:11:13.930678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.723 qpair failed and we were unable to recover it. 00:29:22.723 [2024-11-06 11:11:13.931070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.723 [2024-11-06 11:11:13.931081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.723 qpair failed and we were unable to recover it. 00:29:22.723 [2024-11-06 11:11:13.931381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.723 [2024-11-06 11:11:13.931392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.723 qpair failed and we were unable to recover it. 00:29:22.723 [2024-11-06 11:11:13.931694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.723 [2024-11-06 11:11:13.931704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.723 qpair failed and we were unable to recover it. 
00:29:22.723 [2024-11-06 11:11:13.932011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.723 [2024-11-06 11:11:13.932022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.723 qpair failed and we were unable to recover it. 00:29:22.723 [2024-11-06 11:11:13.932324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.723 [2024-11-06 11:11:13.932336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.723 qpair failed and we were unable to recover it. 00:29:22.723 [2024-11-06 11:11:13.932574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.723 [2024-11-06 11:11:13.932586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.723 qpair failed and we were unable to recover it. 00:29:22.723 [2024-11-06 11:11:13.932888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.723 [2024-11-06 11:11:13.932900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.723 qpair failed and we were unable to recover it. 00:29:22.723 [2024-11-06 11:11:13.933180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.723 [2024-11-06 11:11:13.933192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.723 qpair failed and we were unable to recover it. 
00:29:22.723 [2024-11-06 11:11:13.933510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.723 [2024-11-06 11:11:13.933522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.723 qpair failed and we were unable to recover it.
00:29:22.723 [... the connect()/qpair-failure triplet above repeats 113 more times between 11:11:13.933820 and 11:11:13.967402, every attempt failing with errno = 111 for tqpair=0x20af0c0, addr=10.0.0.2, port=4420 ...]
00:29:22.726 [2024-11-06 11:11:13.967692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.726 [2024-11-06 11:11:13.967703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.726 qpair failed and we were unable to recover it.
00:29:22.726 [2024-11-06 11:11:13.968031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.968043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.968345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.968358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.968686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.968698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.969000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.969012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.969315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.969326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 
00:29:22.726 [2024-11-06 11:11:13.969643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.969655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.969940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.969952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.970250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.970260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.970586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.970598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.970908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.970919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 
00:29:22.726 [2024-11-06 11:11:13.971238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.971250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.971549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.971560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.971869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.971880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.972221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.972232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.972500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.972510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 
00:29:22.726 [2024-11-06 11:11:13.972802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.972813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.973118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.973130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.973426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.973437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.973757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.973769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.974083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.974094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 
00:29:22.726 [2024-11-06 11:11:13.974396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.974416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.974741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.974757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.975045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.975056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.975360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.975371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.975704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.975715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 
00:29:22.726 [2024-11-06 11:11:13.976006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.976018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.726 qpair failed and we were unable to recover it. 00:29:22.726 [2024-11-06 11:11:13.976326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.726 [2024-11-06 11:11:13.976338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.976666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.976677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.976974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.976986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.977302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.977313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 
00:29:22.727 [2024-11-06 11:11:13.977594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.977605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.977906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.977917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.978215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.978227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.978562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.978573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.978790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.978801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 
00:29:22.727 [2024-11-06 11:11:13.979112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.979123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.979423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.979434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.979738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.979759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.980112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.980123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.980432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.980443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 
00:29:22.727 [2024-11-06 11:11:13.980770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.980782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.981087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.981098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.981427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.981438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.981742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.981758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.982082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.982092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 
00:29:22.727 [2024-11-06 11:11:13.982398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.982411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.982695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.982706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.983004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.983017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.983333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.983344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.983634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.983644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 
00:29:22.727 [2024-11-06 11:11:13.983972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.983983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.984315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.984327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.984632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.984643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.984966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.984979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.985309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.985320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 
00:29:22.727 [2024-11-06 11:11:13.985628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.985639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.985967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.985979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.986313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.986324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.986668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.986680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.986991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.987002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 
00:29:22.727 [2024-11-06 11:11:13.987312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.727 [2024-11-06 11:11:13.987324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.727 qpair failed and we were unable to recover it. 00:29:22.727 [2024-11-06 11:11:13.987631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.728 [2024-11-06 11:11:13.987642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.728 qpair failed and we were unable to recover it. 00:29:22.728 [2024-11-06 11:11:13.987965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.728 [2024-11-06 11:11:13.987977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.728 qpair failed and we were unable to recover it. 00:29:22.728 [2024-11-06 11:11:13.988309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.728 [2024-11-06 11:11:13.988321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.728 qpair failed and we were unable to recover it. 00:29:22.728 [2024-11-06 11:11:13.988645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.728 [2024-11-06 11:11:13.988659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.728 qpair failed and we were unable to recover it. 
00:29:22.728 [2024-11-06 11:11:13.988981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.728 [2024-11-06 11:11:13.988993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.728 qpair failed and we were unable to recover it. 00:29:22.728 [2024-11-06 11:11:13.989327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.728 [2024-11-06 11:11:13.989339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.728 qpair failed and we were unable to recover it. 00:29:22.728 [2024-11-06 11:11:13.989666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.728 [2024-11-06 11:11:13.989678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.728 qpair failed and we were unable to recover it. 00:29:22.728 [2024-11-06 11:11:13.989995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.728 [2024-11-06 11:11:13.990007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.728 qpair failed and we were unable to recover it. 00:29:22.728 [2024-11-06 11:11:13.990340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.728 [2024-11-06 11:11:13.990352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.728 qpair failed and we were unable to recover it. 
00:29:22.728 [2024-11-06 11:11:13.990679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.728 [2024-11-06 11:11:13.990691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.728 qpair failed and we were unable to recover it. 00:29:22.728 [2024-11-06 11:11:13.990986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.728 [2024-11-06 11:11:13.990998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.728 qpair failed and we were unable to recover it. 00:29:22.728 [2024-11-06 11:11:13.991309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.728 [2024-11-06 11:11:13.991321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.728 qpair failed and we were unable to recover it. 00:29:22.728 [2024-11-06 11:11:13.991615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.728 [2024-11-06 11:11:13.991627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.728 qpair failed and we were unable to recover it. 00:29:22.728 [2024-11-06 11:11:13.991972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.728 [2024-11-06 11:11:13.991984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.728 qpair failed and we were unable to recover it. 
00:29:22.728 [2024-11-06 11:11:13.992351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.728 [2024-11-06 11:11:13.992363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.728 qpair failed and we were unable to recover it.
00:29:22.731 [... the same connect()-failed / sock-connection-error / "qpair failed and we were unable to recover it" triplet repeats verbatim (only the timestamps differ) from 11:11:13.992687 through 11:11:14.028421 ...]
00:29:22.731 [2024-11-06 11:11:14.028732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.028742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.731 [2024-11-06 11:11:14.029165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.029176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.731 [2024-11-06 11:11:14.029491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.029502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.731 [2024-11-06 11:11:14.029813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.029825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.731 [2024-11-06 11:11:14.030013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.030025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 
00:29:22.731 [2024-11-06 11:11:14.030303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.030314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.731 [2024-11-06 11:11:14.030647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.030658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.731 [2024-11-06 11:11:14.030965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.030977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.731 [2024-11-06 11:11:14.031278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.031289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.731 [2024-11-06 11:11:14.031594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.031606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 
00:29:22.731 [2024-11-06 11:11:14.031799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.031810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.731 [2024-11-06 11:11:14.032085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.032097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.731 [2024-11-06 11:11:14.032272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.032284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.731 [2024-11-06 11:11:14.032616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.032627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.731 [2024-11-06 11:11:14.032966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.032977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 
00:29:22.731 [2024-11-06 11:11:14.033328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.033339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.731 [2024-11-06 11:11:14.033559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.033571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.731 [2024-11-06 11:11:14.033858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.033870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.731 [2024-11-06 11:11:14.034201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.034211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.731 [2024-11-06 11:11:14.034519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.034530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 
00:29:22.731 [2024-11-06 11:11:14.034702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.034714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.731 [2024-11-06 11:11:14.035105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.731 [2024-11-06 11:11:14.035117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.731 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.035531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.035542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.035889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.035903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.036195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.036206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 
00:29:22.732 [2024-11-06 11:11:14.036523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.036534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.036752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.036763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.037074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.037084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.037275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.037288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.037599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.037611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 
00:29:22.732 [2024-11-06 11:11:14.037916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.037927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.038234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.038245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.038538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.038549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.038866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.038877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.039160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.039171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 
00:29:22.732 [2024-11-06 11:11:14.039464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.039475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.039645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.039656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.039742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.039763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.040042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.040054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.040356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.040366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 
00:29:22.732 [2024-11-06 11:11:14.040676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.040687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.041020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.041032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.041317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.041329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.041520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.041534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.041848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.041860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 
00:29:22.732 [2024-11-06 11:11:14.042193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.042204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.042552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.042563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.042865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.042876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.043200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.043211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.043516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.043527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 
00:29:22.732 [2024-11-06 11:11:14.043865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.043876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.044195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.044207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.044515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.044527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.044794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.044805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.045126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.045138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 
00:29:22.732 [2024-11-06 11:11:14.045449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.045459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.045655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.045667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.046012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.046024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.046341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.046352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.732 [2024-11-06 11:11:14.046681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.046692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 
00:29:22.732 [2024-11-06 11:11:14.046989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.732 [2024-11-06 11:11:14.047000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.732 qpair failed and we were unable to recover it. 00:29:22.733 [2024-11-06 11:11:14.047326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.733 [2024-11-06 11:11:14.047337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.733 qpair failed and we were unable to recover it. 00:29:22.733 [2024-11-06 11:11:14.047676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.733 [2024-11-06 11:11:14.047687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.733 qpair failed and we were unable to recover it. 00:29:22.733 [2024-11-06 11:11:14.048010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.733 [2024-11-06 11:11:14.048021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.733 qpair failed and we were unable to recover it. 00:29:22.733 [2024-11-06 11:11:14.048421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.733 [2024-11-06 11:11:14.048432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.733 qpair failed and we were unable to recover it. 
00:29:22.733 [2024-11-06 11:11:14.048735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.733 [2024-11-06 11:11:14.048750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.733 qpair failed and we were unable to recover it. 00:29:22.733 [2024-11-06 11:11:14.049042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.733 [2024-11-06 11:11:14.049053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.733 qpair failed and we were unable to recover it. 00:29:22.733 [2024-11-06 11:11:14.049403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.733 [2024-11-06 11:11:14.049414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.733 qpair failed and we were unable to recover it. 00:29:22.733 [2024-11-06 11:11:14.049723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.733 [2024-11-06 11:11:14.049734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.733 qpair failed and we were unable to recover it. 00:29:22.733 [2024-11-06 11:11:14.050047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.733 [2024-11-06 11:11:14.050059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.733 qpair failed and we were unable to recover it. 
00:29:22.733 [2024-11-06 11:11:14.050245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.733 [2024-11-06 11:11:14.050255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.733 qpair failed and we were unable to recover it. 00:29:22.733 [2024-11-06 11:11:14.050412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.733 [2024-11-06 11:11:14.050423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.733 qpair failed and we were unable to recover it. 00:29:22.733 [2024-11-06 11:11:14.050819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.733 [2024-11-06 11:11:14.050830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.733 qpair failed and we were unable to recover it. 00:29:22.733 [2024-11-06 11:11:14.051128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.733 [2024-11-06 11:11:14.051139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.733 qpair failed and we were unable to recover it. 00:29:22.733 [2024-11-06 11:11:14.051450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.733 [2024-11-06 11:11:14.051461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.733 qpair failed and we were unable to recover it. 
00:29:22.733 [2024-11-06 11:11:14.051772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.733 [2024-11-06 11:11:14.051783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:22.733 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats approximately 115 more times, timestamps 2024-11-06 11:11:14.051772 through 11:11:14.087142 ...]
00:29:22.736 [2024-11-06 11:11:14.087438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.087449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.736 [2024-11-06 11:11:14.087726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.087737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.736 [2024-11-06 11:11:14.088030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.088041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.736 [2024-11-06 11:11:14.088342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.088353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.736 [2024-11-06 11:11:14.088653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.088664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 
00:29:22.736 [2024-11-06 11:11:14.089008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.089019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.736 [2024-11-06 11:11:14.089329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.089340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.736 [2024-11-06 11:11:14.089644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.089656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.736 [2024-11-06 11:11:14.089979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.089990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.736 [2024-11-06 11:11:14.090322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.090333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 
00:29:22.736 [2024-11-06 11:11:14.090641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.090652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.736 [2024-11-06 11:11:14.090956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.090968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.736 [2024-11-06 11:11:14.091269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.091280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.736 [2024-11-06 11:11:14.091563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.091574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.736 [2024-11-06 11:11:14.091879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.091890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 
00:29:22.736 [2024-11-06 11:11:14.092191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.092202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.736 [2024-11-06 11:11:14.092528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.092539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.736 [2024-11-06 11:11:14.092867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.092878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.736 [2024-11-06 11:11:14.093094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.093105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.736 [2024-11-06 11:11:14.093414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.093425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 
00:29:22.736 [2024-11-06 11:11:14.093752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.093764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.736 [2024-11-06 11:11:14.094090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.736 [2024-11-06 11:11:14.094101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.736 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.094393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.094404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.094704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.094716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.095032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.095043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 
00:29:22.737 [2024-11-06 11:11:14.095380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.095390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.095690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.095701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.096021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.096032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.096289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.096299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.096632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.096643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 
00:29:22.737 [2024-11-06 11:11:14.096941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.096953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.097260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.097271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.097573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.097584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.097866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.097877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.098179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.098190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 
00:29:22.737 [2024-11-06 11:11:14.098494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.098505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.098827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.098839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.099134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.099145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.099444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.099454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.099755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.099766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 
00:29:22.737 [2024-11-06 11:11:14.100074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.100085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.100417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.100429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.100723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.100733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.101026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.101037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.101355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.101367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 
00:29:22.737 [2024-11-06 11:11:14.101648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.101659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.101959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.101970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.102296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.102307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.102585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.102596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.102898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.102909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 
00:29:22.737 [2024-11-06 11:11:14.103250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.103264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.103569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.103579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.103879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.103891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.104220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.104230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.104531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.104542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 
00:29:22.737 [2024-11-06 11:11:14.104839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.104851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.105153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.105165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.105445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.105456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.105796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.105807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.737 [2024-11-06 11:11:14.106109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.106120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 
00:29:22.737 [2024-11-06 11:11:14.106425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.737 [2024-11-06 11:11:14.106437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.737 qpair failed and we were unable to recover it. 00:29:22.738 [2024-11-06 11:11:14.106768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.106780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 00:29:22.738 [2024-11-06 11:11:14.107091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.107101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 00:29:22.738 [2024-11-06 11:11:14.107401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.107412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 00:29:22.738 [2024-11-06 11:11:14.107741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.107756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 
00:29:22.738 [2024-11-06 11:11:14.108092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.108103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 00:29:22.738 [2024-11-06 11:11:14.108462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.108472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 00:29:22.738 [2024-11-06 11:11:14.108777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.108788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 00:29:22.738 [2024-11-06 11:11:14.109085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.109095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 00:29:22.738 [2024-11-06 11:11:14.109376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.109387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 
00:29:22.738 [2024-11-06 11:11:14.109625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.109636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 00:29:22.738 [2024-11-06 11:11:14.109947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.109959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 00:29:22.738 [2024-11-06 11:11:14.110273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.110283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 00:29:22.738 [2024-11-06 11:11:14.110577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.110587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 00:29:22.738 [2024-11-06 11:11:14.110898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.110909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 
00:29:22.738 [2024-11-06 11:11:14.111232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.111243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 00:29:22.738 [2024-11-06 11:11:14.111549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.111559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 00:29:22.738 [2024-11-06 11:11:14.111839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.111850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 00:29:22.738 [2024-11-06 11:11:14.112152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.112163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 00:29:22.738 [2024-11-06 11:11:14.112452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.738 [2024-11-06 11:11:14.112464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:22.738 qpair failed and we were unable to recover it. 
00:29:23.017 [2024-11-06 11:11:14.147002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.017 [2024-11-06 11:11:14.147013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.017 qpair failed and we were unable to recover it. 00:29:23.017 [2024-11-06 11:11:14.147321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.017 [2024-11-06 11:11:14.147331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.017 qpair failed and we were unable to recover it. 00:29:23.017 [2024-11-06 11:11:14.147640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.017 [2024-11-06 11:11:14.147650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.017 qpair failed and we were unable to recover it. 00:29:23.017 [2024-11-06 11:11:14.147960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.017 [2024-11-06 11:11:14.147971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.017 qpair failed and we were unable to recover it. 00:29:23.017 [2024-11-06 11:11:14.148270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.017 [2024-11-06 11:11:14.148281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.017 qpair failed and we were unable to recover it. 
00:29:23.017 [2024-11-06 11:11:14.148589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.017 [2024-11-06 11:11:14.148599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.017 qpair failed and we were unable to recover it. 00:29:23.017 [2024-11-06 11:11:14.148910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.017 [2024-11-06 11:11:14.148921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.017 qpair failed and we were unable to recover it. 00:29:23.017 [2024-11-06 11:11:14.149197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.017 [2024-11-06 11:11:14.149207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.017 qpair failed and we were unable to recover it. 00:29:23.017 [2024-11-06 11:11:14.149505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.017 [2024-11-06 11:11:14.149516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.017 qpair failed and we were unable to recover it. 00:29:23.017 [2024-11-06 11:11:14.149823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.017 [2024-11-06 11:11:14.149834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.017 qpair failed and we were unable to recover it. 
00:29:23.017 [2024-11-06 11:11:14.150161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.017 [2024-11-06 11:11:14.150173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.017 qpair failed and we were unable to recover it. 00:29:23.017 [2024-11-06 11:11:14.150507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.017 [2024-11-06 11:11:14.150518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.017 qpair failed and we were unable to recover it. 00:29:23.017 [2024-11-06 11:11:14.150818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.017 [2024-11-06 11:11:14.150830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.017 qpair failed and we were unable to recover it. 00:29:23.017 [2024-11-06 11:11:14.151134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.017 [2024-11-06 11:11:14.151145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.017 qpair failed and we were unable to recover it. 00:29:23.017 [2024-11-06 11:11:14.151450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.151460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 
00:29:23.018 [2024-11-06 11:11:14.151737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.151755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.152048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.152059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.152363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.152375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.152703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.152714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.153050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.153062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 
00:29:23.018 [2024-11-06 11:11:14.153387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.153398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.153705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.153716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.154030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.154041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.154372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.154385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.154683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.154695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 
00:29:23.018 [2024-11-06 11:11:14.154992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.155004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.155335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.155346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.155673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.155684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.155878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.155890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.156216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.156226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 
00:29:23.018 [2024-11-06 11:11:14.156526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.156537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.156719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.156731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.157069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.157081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.157389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.157400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.157694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.157705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 
00:29:23.018 [2024-11-06 11:11:14.158019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.158031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.158358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.158369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.158698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.158710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.159039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.159051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.159387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.159398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 
00:29:23.018 [2024-11-06 11:11:14.159708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.159719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.160022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.160034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.160360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.160372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.160699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.160711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.160954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.160965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 
00:29:23.018 [2024-11-06 11:11:14.161265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.161276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.161608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.161619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.161951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.161963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.162291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.162303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.162603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.162615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 
00:29:23.018 [2024-11-06 11:11:14.162973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.162988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.163318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.163330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.163629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.018 [2024-11-06 11:11:14.163641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-11-06 11:11:14.163945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.163956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.164283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.164295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 
00:29:23.019 [2024-11-06 11:11:14.164603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.164615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.164947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.164959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.165260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.165271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.165610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.165621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.165902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.165913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 
00:29:23.019 [2024-11-06 11:11:14.166198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.166209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.166508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.166519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.166708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.166720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.167042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.167053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.167366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.167377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 
00:29:23.019 [2024-11-06 11:11:14.167689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.167700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.168022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.168034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.168307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.168318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.168624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.168635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.168963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.168974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 
00:29:23.019 [2024-11-06 11:11:14.169273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.169283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.169619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.169630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.169961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.169973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.170280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.170291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.170597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.170607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 
00:29:23.019 [2024-11-06 11:11:14.170901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.170912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.171210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.171221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.171510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.171524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.171813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.171824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-11-06 11:11:14.172149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.019 [2024-11-06 11:11:14.172160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.019 qpair failed and we were unable to recover it. 
00:29:23.022 [2024-11-06 11:11:14.206964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.022 [2024-11-06 11:11:14.206975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.022 qpair failed and we were unable to recover it. 00:29:23.022 [2024-11-06 11:11:14.207258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.022 [2024-11-06 11:11:14.207268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.022 qpair failed and we were unable to recover it. 00:29:23.022 [2024-11-06 11:11:14.207574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.022 [2024-11-06 11:11:14.207584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.022 qpair failed and we were unable to recover it. 00:29:23.022 [2024-11-06 11:11:14.207919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.022 [2024-11-06 11:11:14.207931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.022 qpair failed and we were unable to recover it. 00:29:23.022 [2024-11-06 11:11:14.208280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.022 [2024-11-06 11:11:14.208291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.022 qpair failed and we were unable to recover it. 
00:29:23.022 [2024-11-06 11:11:14.208584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.022 [2024-11-06 11:11:14.208596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.022 qpair failed and we were unable to recover it. 00:29:23.022 [2024-11-06 11:11:14.208903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.022 [2024-11-06 11:11:14.208915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.022 qpair failed and we were unable to recover it. 00:29:23.022 [2024-11-06 11:11:14.209240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.022 [2024-11-06 11:11:14.209252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.022 qpair failed and we were unable to recover it. 00:29:23.022 [2024-11-06 11:11:14.209562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.022 [2024-11-06 11:11:14.209572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.022 qpair failed and we were unable to recover it. 00:29:23.022 [2024-11-06 11:11:14.209761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.022 [2024-11-06 11:11:14.209773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.022 qpair failed and we were unable to recover it. 
00:29:23.022 [2024-11-06 11:11:14.210060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.022 [2024-11-06 11:11:14.210070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.022 qpair failed and we were unable to recover it. 00:29:23.022 [2024-11-06 11:11:14.210412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.022 [2024-11-06 11:11:14.210423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.210599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.210612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.210953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.210963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.211290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.211301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 
00:29:23.023 [2024-11-06 11:11:14.211608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.211619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.211936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.211947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.212241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.212252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.212557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.212567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.212912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.212924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 
00:29:23.023 [2024-11-06 11:11:14.213269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.213280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.213483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.213493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.213814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.213825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.214152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.214163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.214462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.214472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 
00:29:23.023 [2024-11-06 11:11:14.214804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.214815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.215120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.215131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.215435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.215446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.215753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.215765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.216058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.216069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 
00:29:23.023 [2024-11-06 11:11:14.216372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.216382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.216684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.216695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.216996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.217007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.217279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.217290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.217586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.217597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 
00:29:23.023 [2024-11-06 11:11:14.217939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.217950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.218280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.218291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.218625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.218636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.218924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.218936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.219255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.219267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 
00:29:23.023 [2024-11-06 11:11:14.219611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.219622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.219919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.219930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.220125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.220137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.220443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.220453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.220782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.220793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 
00:29:23.023 [2024-11-06 11:11:14.221093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.221103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.221412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.221423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.221728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.221739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.222047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.222058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 00:29:23.023 [2024-11-06 11:11:14.222349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.222360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.023 qpair failed and we were unable to recover it. 
00:29:23.023 [2024-11-06 11:11:14.222658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.023 [2024-11-06 11:11:14.222671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.223037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.223049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.223352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.223362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.223694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.223705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.223977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.223988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 
00:29:23.024 [2024-11-06 11:11:14.224308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.224319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.224625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.224636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.224895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.224906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.225212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.225223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.225576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.225587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 
00:29:23.024 [2024-11-06 11:11:14.225804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.225816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.226113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.226124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.226424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.226435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.226755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.226767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.227066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.227077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 
00:29:23.024 [2024-11-06 11:11:14.227360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.227371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.227635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.227646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.227965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.227977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.228285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.228296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.228575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.228586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 
00:29:23.024 [2024-11-06 11:11:14.228887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.228900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.229226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.229238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.229528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.229538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.229868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.229879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 00:29:23.024 [2024-11-06 11:11:14.230184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.024 [2024-11-06 11:11:14.230195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.024 qpair failed and we were unable to recover it. 
00:29:23.024 [2024-11-06 11:11:14.230503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.024 [2024-11-06 11:11:14.230513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.024 qpair failed and we were unable to recover it.
[... identical connect()/qpair failure sequence repeats, 115 occurrences between 11:11:14.230503 and 11:11:14.265889 (log timestamps 00:29:23.024-00:29:23.027) ...]
00:29:23.027 [2024-11-06 11:11:14.265878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.027 [2024-11-06 11:11:14.265889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.027 qpair failed and we were unable to recover it.
00:29:23.027 [2024-11-06 11:11:14.266216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.027 [2024-11-06 11:11:14.266227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.027 qpair failed and we were unable to recover it. 00:29:23.027 [2024-11-06 11:11:14.266511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.027 [2024-11-06 11:11:14.266522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.027 qpair failed and we were unable to recover it. 00:29:23.027 [2024-11-06 11:11:14.266841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.027 [2024-11-06 11:11:14.266852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.027 qpair failed and we were unable to recover it. 00:29:23.027 [2024-11-06 11:11:14.267140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.027 [2024-11-06 11:11:14.267151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.027 qpair failed and we were unable to recover it. 00:29:23.027 [2024-11-06 11:11:14.267448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.027 [2024-11-06 11:11:14.267459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.027 qpair failed and we were unable to recover it. 
00:29:23.027 [2024-11-06 11:11:14.267799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.027 [2024-11-06 11:11:14.267810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.027 qpair failed and we were unable to recover it. 00:29:23.027 [2024-11-06 11:11:14.268116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.027 [2024-11-06 11:11:14.268127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.027 qpair failed and we were unable to recover it. 00:29:23.027 [2024-11-06 11:11:14.268455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.027 [2024-11-06 11:11:14.268466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.027 qpair failed and we were unable to recover it. 00:29:23.027 [2024-11-06 11:11:14.268768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.027 [2024-11-06 11:11:14.268779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.027 qpair failed and we were unable to recover it. 00:29:23.027 [2024-11-06 11:11:14.269072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.027 [2024-11-06 11:11:14.269086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.027 qpair failed and we were unable to recover it. 
00:29:23.027 [2024-11-06 11:11:14.269427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.027 [2024-11-06 11:11:14.269438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.027 qpair failed and we were unable to recover it. 00:29:23.027 [2024-11-06 11:11:14.269770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.269782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.270070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.270081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.270375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.270386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.270695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.270706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 
00:29:23.028 [2024-11-06 11:11:14.271037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.271049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.271373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.271384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.271689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.271701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.272019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.272030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.272328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.272340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 
00:29:23.028 [2024-11-06 11:11:14.272670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.272681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.273013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.273024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.273313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.273324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.273666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.273678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.274008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.274019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 
00:29:23.028 [2024-11-06 11:11:14.274346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.274358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.274661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.274672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.274981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.274993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.275303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.275315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.275623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.275635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 
00:29:23.028 [2024-11-06 11:11:14.275970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.275981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.276306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.276316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.276615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.276626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.276809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.276822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.277148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.277159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 
00:29:23.028 [2024-11-06 11:11:14.277437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.277448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.277749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.277761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.278098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.278108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.278446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.278459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.278777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.278788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 
00:29:23.028 [2024-11-06 11:11:14.279088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.279099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.279245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.279256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.279487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.279498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.279814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.279824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.280141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.280152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 
00:29:23.028 [2024-11-06 11:11:14.280371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.280383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.280692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.280703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.281036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.028 [2024-11-06 11:11:14.281049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.028 qpair failed and we were unable to recover it. 00:29:23.028 [2024-11-06 11:11:14.281345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.281357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.281670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.281682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 
00:29:23.029 [2024-11-06 11:11:14.281939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.281950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.282261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.282273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.282578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.282589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.282896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.282907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.283069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.283083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 
00:29:23.029 [2024-11-06 11:11:14.283410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.283420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.283760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.283771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.284053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.284064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.284413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.284424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.284783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.284794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 
00:29:23.029 [2024-11-06 11:11:14.285079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.285090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.285396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.285407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.285597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.285609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.285913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.285925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.286236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.286247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 
00:29:23.029 [2024-11-06 11:11:14.286591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.286601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.286817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.286828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.287150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.287162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.287465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.287476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.287778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.287789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 
00:29:23.029 [2024-11-06 11:11:14.288120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.288131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.288423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.288433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.288720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.288730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.289046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.289058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.289365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.289376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 
00:29:23.029 [2024-11-06 11:11:14.289708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.289722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.290052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.290063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.290358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.290372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.290678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.290690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 00:29:23.029 [2024-11-06 11:11:14.291037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.029 [2024-11-06 11:11:14.291048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.029 qpair failed and we were unable to recover it. 
00:29:23.032 [... the same pair of errors — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 — repeats with only the timestamps changing, through [2024-11-06 11:11:14.326485]; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:23.032 [2024-11-06 11:11:14.326788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.032 [2024-11-06 11:11:14.326800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.032 qpair failed and we were unable to recover it. 00:29:23.032 [2024-11-06 11:11:14.327123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.032 [2024-11-06 11:11:14.327134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.032 qpair failed and we were unable to recover it. 00:29:23.032 [2024-11-06 11:11:14.327461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.032 [2024-11-06 11:11:14.327472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.032 qpair failed and we were unable to recover it. 00:29:23.032 [2024-11-06 11:11:14.327688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.032 [2024-11-06 11:11:14.327699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.032 qpair failed and we were unable to recover it. 00:29:23.032 [2024-11-06 11:11:14.328003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.032 [2024-11-06 11:11:14.328015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.032 qpair failed and we were unable to recover it. 
00:29:23.032 [2024-11-06 11:11:14.328348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.032 [2024-11-06 11:11:14.328359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.032 qpair failed and we were unable to recover it. 00:29:23.032 [2024-11-06 11:11:14.328658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.032 [2024-11-06 11:11:14.328669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.032 qpair failed and we were unable to recover it. 00:29:23.032 [2024-11-06 11:11:14.328977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.032 [2024-11-06 11:11:14.328989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.032 qpair failed and we were unable to recover it. 00:29:23.032 [2024-11-06 11:11:14.329605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.032 [2024-11-06 11:11:14.329625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.032 qpair failed and we were unable to recover it. 00:29:23.032 [2024-11-06 11:11:14.329952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.032 [2024-11-06 11:11:14.329967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.032 qpair failed and we were unable to recover it. 
00:29:23.032 [2024-11-06 11:11:14.330288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.330298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.330617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.330629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.330964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.330976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.331304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.331316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.331668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.331679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 
00:29:23.033 [2024-11-06 11:11:14.331991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.332003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.332329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.332340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.332679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.332691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.332994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.333007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.333336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.333347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 
00:29:23.033 [2024-11-06 11:11:14.333655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.333667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.333987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.333999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.334362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.334373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.334674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.334686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.334994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.335005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 
00:29:23.033 [2024-11-06 11:11:14.335342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.335353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.335650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.335662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.335977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.335988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.336279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.336290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.336565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.336576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 
00:29:23.033 [2024-11-06 11:11:14.336908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.336919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.337224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.337237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.337570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.337581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.337907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.337919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.338248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.338258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 
00:29:23.033 [2024-11-06 11:11:14.338557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.338568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.338885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.338898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.339192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.339203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.339497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.339509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.339805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.339817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 
00:29:23.033 [2024-11-06 11:11:14.340023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.340036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.340326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.340337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.340672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.340684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.340981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.340992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.033 qpair failed and we were unable to recover it. 00:29:23.033 [2024-11-06 11:11:14.341319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.033 [2024-11-06 11:11:14.341330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 
00:29:23.034 [2024-11-06 11:11:14.341665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.341677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.341947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.341958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.342223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.342234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.342536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.342547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.342828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.342841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 
00:29:23.034 [2024-11-06 11:11:14.343143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.343154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.343465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.343476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.343769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.343780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.344001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.344012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.344346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.344358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 
00:29:23.034 [2024-11-06 11:11:14.344665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.344676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.344993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.345004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.345212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.345223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.345531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.345541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.345848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.345861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 
00:29:23.034 [2024-11-06 11:11:14.346178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.346189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.346508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.346519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.346853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.346864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.347174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.347185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.347397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.347408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 
00:29:23.034 [2024-11-06 11:11:14.347702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.347714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.348045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.348056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.348316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.348327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.348625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.348636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.348967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.348978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 
00:29:23.034 [2024-11-06 11:11:14.349336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.349347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.349633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.349645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.349944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.349955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.350283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.350294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.350614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.350624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 
00:29:23.034 [2024-11-06 11:11:14.350957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.350968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.351278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.351289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.351591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.351604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.351962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.351973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.352318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.352330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 
00:29:23.034 [2024-11-06 11:11:14.352666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.352679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.352978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.352989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.353286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.034 [2024-11-06 11:11:14.353297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.034 qpair failed and we were unable to recover it. 00:29:23.034 [2024-11-06 11:11:14.353628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.353638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.353988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.354000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 
00:29:23.035 [2024-11-06 11:11:14.354330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.354341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.354649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.354660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.354979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.354990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.355321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.355332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.355538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.355551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 
00:29:23.035 [2024-11-06 11:11:14.355860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.355871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.356107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.356118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.356425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.356436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.356753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.356765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.356988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.356999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 
00:29:23.035 [2024-11-06 11:11:14.357298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.357309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.357636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.357648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.357958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.357970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.358180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.358190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.358492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.358502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 
00:29:23.035 [2024-11-06 11:11:14.358832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.358844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.359159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.359171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.359477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.359488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.359858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.359869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.360175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.360189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 
00:29:23.035 [2024-11-06 11:11:14.360523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.360534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.360835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.360846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.361178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.361189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.361554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.361565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.361862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.361873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 
00:29:23.035 [2024-11-06 11:11:14.362042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.362055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.362374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.362385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.362689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.362700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.363031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.363042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.363393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.363404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 
00:29:23.035 [2024-11-06 11:11:14.363713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.363724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.364021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.364034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.364365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.364376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.364688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.364699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.365006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.365018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 
00:29:23.035 [2024-11-06 11:11:14.365358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.365369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.365571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.035 [2024-11-06 11:11:14.365582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.035 qpair failed and we were unable to recover it. 00:29:23.035 [2024-11-06 11:11:14.365899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.365910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.366249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.366260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.366567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.366578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 
00:29:23.036 [2024-11-06 11:11:14.366873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.366885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.367190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.367201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.367517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.367528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.367881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.367893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.368194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.368205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 
00:29:23.036 [2024-11-06 11:11:14.368407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.368418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.368689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.368701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.368979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.368990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.369154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.369167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.369371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.369381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 
00:29:23.036 [2024-11-06 11:11:14.369572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.369582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.369890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.369901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.370234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.370245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.370557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.370568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.370934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.370945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 
00:29:23.036 [2024-11-06 11:11:14.371238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.371249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.371543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.371554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.371892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.371903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.372206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.372217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.372513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.372525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 
00:29:23.036 [2024-11-06 11:11:14.372908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.372920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.373118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.373128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.373435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.373445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.373758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.373770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.374101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.374112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 
00:29:23.036 [2024-11-06 11:11:14.374451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.374461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.374733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.374744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.374926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.374938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.375251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.375262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.375457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.375469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 
00:29:23.036 [2024-11-06 11:11:14.375652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.375663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.375907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.375919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.376112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.376123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.376401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.376412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.376758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.376769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 
00:29:23.036 [2024-11-06 11:11:14.377055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.036 [2024-11-06 11:11:14.377066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.036 qpair failed and we were unable to recover it. 00:29:23.036 [2024-11-06 11:11:14.377266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.377277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-11-06 11:11:14.377546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.377559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-11-06 11:11:14.377920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.377931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-11-06 11:11:14.378098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.378109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 
00:29:23.037 [2024-11-06 11:11:14.378401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.378411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-11-06 11:11:14.378622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.378633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-11-06 11:11:14.378950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.378962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-11-06 11:11:14.379294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.379305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-11-06 11:11:14.379520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.379531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 
00:29:23.037 [2024-11-06 11:11:14.379829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.379840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-11-06 11:11:14.380115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.380126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-11-06 11:11:14.380312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.380324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-11-06 11:11:14.380613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.380623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-11-06 11:11:14.380928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.380940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 
00:29:23.037 [2024-11-06 11:11:14.381177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.381188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-11-06 11:11:14.381510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.381521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-11-06 11:11:14.381698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.381710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-11-06 11:11:14.382028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.382040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 00:29:23.037 [2024-11-06 11:11:14.382207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.037 [2024-11-06 11:11:14.382218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.037 qpair failed and we were unable to recover it. 
00:29:23.037 [2024-11-06 11:11:14.382408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.382420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.382798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.382809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.383104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.383115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.383275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.383286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.383563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.383574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.383760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.383773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.384153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.384164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.384468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.384479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.384798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.384810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.385144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.385155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.385457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.385468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.385767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.385779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.386092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.386104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.386432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.386443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.386614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.386626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.386919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.386930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.387211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.387221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.387575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.387587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.387873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.387884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.388195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.037 [2024-11-06 11:11:14.388208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.037 qpair failed and we were unable to recover it.
00:29:23.037 [2024-11-06 11:11:14.388510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.388521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.388856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.388869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.389045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.389056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.389383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.389394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.389581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.389592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.389876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.389888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.390232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.390242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.390551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.390561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.390873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.390884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.391226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.391238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.391549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.391560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.391866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.391878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.392206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.392216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.392548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.392559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.392871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.392883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.393208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.393219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.393548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.393559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.393894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.393905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.394233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.394244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.394562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.394573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.394874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.394886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.395196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.395206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.395505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.395516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.395825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.395836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.396156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.396167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.396351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.396363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.396727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.396740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.397022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.397034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.397213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.397224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.397511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.397522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.397850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.397861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.398173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.398184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.398511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.398522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.398739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.398758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.038 qpair failed and we were unable to recover it.
00:29:23.038 [2024-11-06 11:11:14.399060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.038 [2024-11-06 11:11:14.399072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.399395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.399406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.399714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.399726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.400063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.400075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.400402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.400413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.400718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.400730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.401063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.401075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.401410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.401420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.401719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.401729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.402060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.402071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.402399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.402410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.402701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.402713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.403021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.403033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.403323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.403335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.403643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.403655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.403963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.403973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.404297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.404309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.404640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.404652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.404974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.404986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.405294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.405308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.405633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.405645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.405928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.405940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.406256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.406267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.406597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.406608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.406913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.406924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.407233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.407245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.407555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.407567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.407889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.407900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.408207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.408218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.408524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.408535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.408839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.408850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.409163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.409174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.409486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.409498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.409812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.409823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.410134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.410145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.410424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.410436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.410736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.410757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.411052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.411063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.411364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.039 [2024-11-06 11:11:14.411375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.039 qpair failed and we were unable to recover it.
00:29:23.039 [2024-11-06 11:11:14.411702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.040 [2024-11-06 11:11:14.411714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.040 qpair failed and we were unable to recover it.
00:29:23.040 [2024-11-06 11:11:14.412048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.040 [2024-11-06 11:11:14.412059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.040 qpair failed and we were unable to recover it.
00:29:23.040 [2024-11-06 11:11:14.412376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.412387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.412713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.412725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.413063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.413075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.413406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.413417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.413725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.413737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 
00:29:23.040 [2024-11-06 11:11:14.414057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.414069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.414403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.414414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.414743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.414763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.415100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.415112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.415419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.415431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 
00:29:23.040 [2024-11-06 11:11:14.415718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.415729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.416062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.416074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.416254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.416267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.416540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.416551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.416885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.416896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 
00:29:23.040 [2024-11-06 11:11:14.417223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.417234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.417540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.417550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.417872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.417883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.418186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.418196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.418379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.418390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 
00:29:23.040 [2024-11-06 11:11:14.418706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.418716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.418984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.418996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.419280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.419292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.419620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.419631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.419823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.419835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 
00:29:23.040 [2024-11-06 11:11:14.420198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.420208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.420539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.420550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.420743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.420758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.040 [2024-11-06 11:11:14.420937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.040 [2024-11-06 11:11:14.420947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.040 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.421276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.421290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 
00:29:23.316 [2024-11-06 11:11:14.421624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.421634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.421828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.421840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.422073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.422083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.422286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.422297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.422514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.422525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 
00:29:23.316 [2024-11-06 11:11:14.422854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.422865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.423044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.423054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.423413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.423424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.423596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.423606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.423786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.423798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 
00:29:23.316 [2024-11-06 11:11:14.424128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.424140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.424329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.424341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.424680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.424691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.424984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.424995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.425195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.425206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 
00:29:23.316 [2024-11-06 11:11:14.425417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.425427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.425612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.425625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.425833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.425845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.426123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.426135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.426449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.426459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 
00:29:23.316 [2024-11-06 11:11:14.426739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.426757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.427048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.427059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.427350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.427360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.427672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.427683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 00:29:23.316 [2024-11-06 11:11:14.428100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.316 [2024-11-06 11:11:14.428111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.316 qpair failed and we were unable to recover it. 
00:29:23.317 [2024-11-06 11:11:14.428400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.428411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.428813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.428824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.429119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.429130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.429406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.429417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.429733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.429750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 
00:29:23.317 [2024-11-06 11:11:14.429963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.429974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.430247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.430258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.430573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.430584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.430897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.430908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.431098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.431108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 
00:29:23.317 [2024-11-06 11:11:14.431403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.431414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.431751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.431763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.432072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.432082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.432381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.432391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.432699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.432710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 
00:29:23.317 [2024-11-06 11:11:14.433020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.433031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.433322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.433332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.433633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.433643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.433865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.433879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.434217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.434227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 
00:29:23.317 [2024-11-06 11:11:14.434533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.434545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.434735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.434761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.435039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.435050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.435253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.435263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.435552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.435563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 
00:29:23.317 [2024-11-06 11:11:14.435869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.435880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.436191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.436203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.436384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.436396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.436555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.436566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 00:29:23.317 [2024-11-06 11:11:14.436900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.436911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it. 
00:29:23.317 [2024-11-06 11:11:14.437203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.317 [2024-11-06 11:11:14.437214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.317 qpair failed and we were unable to recover it.
00:29:23.320 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence for tqpair=0x20af0c0 (addr=10.0.0.2, port=4420) repeats ~114 more times, timestamps 11:11:14.437491 through 11:11:14.473066 ...]
00:29:23.320 [2024-11-06 11:11:14.473252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.320 [2024-11-06 11:11:14.473263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.320 qpair failed and we were unable to recover it. 00:29:23.320 [2024-11-06 11:11:14.473573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.320 [2024-11-06 11:11:14.473583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.320 qpair failed and we were unable to recover it. 00:29:23.320 [2024-11-06 11:11:14.473892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.473904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.474252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.474262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.474569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.474581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 
00:29:23.321 [2024-11-06 11:11:14.474939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.474950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.475296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.475306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.475619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.475630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.475951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.475962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.476294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.476305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 
00:29:23.321 [2024-11-06 11:11:14.476637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.476648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.476931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.476942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.477273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.477284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.477608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.477620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.477920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.477931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 
00:29:23.321 [2024-11-06 11:11:14.478242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.478252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.478593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.478605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.478897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.478908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.479239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.479250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.479598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.479609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 
00:29:23.321 [2024-11-06 11:11:14.479906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.479917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.480113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.480125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.480445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.480456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.480785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.480796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.481112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.481124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 
00:29:23.321 [2024-11-06 11:11:14.481449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.481460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.481739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.481755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.482046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.482057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.482361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.482373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.482700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.482711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 
00:29:23.321 [2024-11-06 11:11:14.483074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.483085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.483393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.483404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.483706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.483716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.483997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.484008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.484346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.484357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 
00:29:23.321 [2024-11-06 11:11:14.484657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.484668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.484962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.484973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.485191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.485202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.485503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.485514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.321 [2024-11-06 11:11:14.485655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.485666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 
00:29:23.321 [2024-11-06 11:11:14.485961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.321 [2024-11-06 11:11:14.485972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.321 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.486272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.486283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.486583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.486594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.486930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.486941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.487249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.487259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 
00:29:23.322 [2024-11-06 11:11:14.487571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.487582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.487891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.487903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.488239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.488249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.488550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.488562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.488862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.488874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 
00:29:23.322 [2024-11-06 11:11:14.489213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.489224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.489555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.489565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.489733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.489743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.490028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.490039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.490367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.490378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 
00:29:23.322 [2024-11-06 11:11:14.490659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.490670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.490990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.491002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.491337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.491348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.491737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.491753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.492052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.492064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 
00:29:23.322 [2024-11-06 11:11:14.492371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.492382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.492567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.492578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.492891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.492903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.493237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.493251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.493563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.493574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 
00:29:23.322 [2024-11-06 11:11:14.493845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.493856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.494188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.494199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.494327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.494339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.494659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.494670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.494902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.494913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 
00:29:23.322 [2024-11-06 11:11:14.495283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.495294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.495605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.495615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.495835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.495846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.496040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.496050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.496372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.496382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 
00:29:23.322 [2024-11-06 11:11:14.496743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.496759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.496990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.497000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.497325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.497336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.497648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.497659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 00:29:23.322 [2024-11-06 11:11:14.497974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.322 [2024-11-06 11:11:14.497986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.322 qpair failed and we were unable to recover it. 
00:29:23.323 [2024-11-06 11:11:14.498174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.498186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.498483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.498494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.498762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.498773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.499128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.499140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.499446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.499457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 
00:29:23.323 [2024-11-06 11:11:14.499768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.499779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.500084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.500094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.500277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.500289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.500619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.500629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.500823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.500834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 
00:29:23.323 [2024-11-06 11:11:14.501112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.501126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.501439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.501450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.501662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.501673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.501952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.501963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.502273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.502284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 
00:29:23.323 [2024-11-06 11:11:14.502582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.502592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.502884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.502895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.503079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.503089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.503252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.503262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.503480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.503491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 
00:29:23.323 [2024-11-06 11:11:14.503683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.503695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.504012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.504023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.504325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.504336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.504633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.504643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.504814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.504827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 
00:29:23.323 [2024-11-06 11:11:14.505053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.505063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.505390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.505402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.505575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.505588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.505847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.505858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.506259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.506270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 
00:29:23.323 [2024-11-06 11:11:14.506604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.506614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.506945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.506956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.507271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.507281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.507583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.507594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.507898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.507911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 
00:29:23.323 [2024-11-06 11:11:14.508094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.508105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.508370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.508381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.508705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.508717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.323 qpair failed and we were unable to recover it. 00:29:23.323 [2024-11-06 11:11:14.509022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.323 [2024-11-06 11:11:14.509034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.509370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.509381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 
00:29:23.324 [2024-11-06 11:11:14.509685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.509695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.510126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.510138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.510449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.510459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.510796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.510807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.511132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.511142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 
00:29:23.324 [2024-11-06 11:11:14.511445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.511456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.511767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.511779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.511964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.511975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.512296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.512307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.512614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.512625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 
00:29:23.324 [2024-11-06 11:11:14.512938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.512949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.513289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.513301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.513691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.513702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.514010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.514022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.514322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.514332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 
00:29:23.324 [2024-11-06 11:11:14.514610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.514621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.514954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.514965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.515266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.515277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.515592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.515602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.515942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.515954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 
00:29:23.324 [2024-11-06 11:11:14.516268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.516279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.516493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.516504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.516816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.516827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.517138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.517149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.517452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.517462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 
00:29:23.324 [2024-11-06 11:11:14.517796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.517808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.518071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.518082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.518364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.518374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.518703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.518714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.519043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.519054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 
00:29:23.324 [2024-11-06 11:11:14.519356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.519366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.519650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.519660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.519977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.519989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.520293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.520304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 00:29:23.324 [2024-11-06 11:11:14.520619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.520630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.324 qpair failed and we were unable to recover it. 
00:29:23.324 [2024-11-06 11:11:14.520949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.324 [2024-11-06 11:11:14.520961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.325 qpair failed and we were unable to recover it. 00:29:23.325 [2024-11-06 11:11:14.521290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.325 [2024-11-06 11:11:14.521301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.325 qpair failed and we were unable to recover it. 00:29:23.325 [2024-11-06 11:11:14.521628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.325 [2024-11-06 11:11:14.521638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.325 qpair failed and we were unable to recover it. 00:29:23.325 [2024-11-06 11:11:14.521995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.325 [2024-11-06 11:11:14.522009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.325 qpair failed and we were unable to recover it. 00:29:23.325 [2024-11-06 11:11:14.522341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.325 [2024-11-06 11:11:14.522352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.325 qpair failed and we were unable to recover it. 
00:29:23.325 [2024-11-06 11:11:14.522660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.325 [2024-11-06 11:11:14.522671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.325 qpair failed and we were unable to recover it. 00:29:23.325 [2024-11-06 11:11:14.522996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.325 [2024-11-06 11:11:14.523007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.325 qpair failed and we were unable to recover it. 00:29:23.325 [2024-11-06 11:11:14.523338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.325 [2024-11-06 11:11:14.523349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.325 qpair failed and we were unable to recover it. 00:29:23.325 [2024-11-06 11:11:14.523675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.325 [2024-11-06 11:11:14.523686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.325 qpair failed and we were unable to recover it. 00:29:23.325 [2024-11-06 11:11:14.523993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.325 [2024-11-06 11:11:14.524005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.325 qpair failed and we were unable to recover it. 
00:29:23.325 [2024-11-06 11:11:14.524330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.325 [2024-11-06 11:11:14.524341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.325 qpair failed and we were unable to recover it. 00:29:23.325 [2024-11-06 11:11:14.524642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.325 [2024-11-06 11:11:14.524653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.325 qpair failed and we were unable to recover it. 00:29:23.325 [2024-11-06 11:11:14.524957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.325 [2024-11-06 11:11:14.524968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.325 qpair failed and we were unable to recover it. 00:29:23.325 [2024-11-06 11:11:14.525273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.325 [2024-11-06 11:11:14.525284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.325 qpair failed and we were unable to recover it. 00:29:23.325 [2024-11-06 11:11:14.525599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.325 [2024-11-06 11:11:14.525609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.325 qpair failed and we were unable to recover it. 
00:29:23.325 [2024-11-06 11:11:14.525902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.325 [2024-11-06 11:11:14.525913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.325 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with new timestamps through 2024-11-06 11:11:14.561162; repeats truncated ...]
00:29:23.328 [2024-11-06 11:11:14.561465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.561476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 00:29:23.328 [2024-11-06 11:11:14.561658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.561669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 00:29:23.328 [2024-11-06 11:11:14.561868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.561879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 00:29:23.328 [2024-11-06 11:11:14.562202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.562213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 00:29:23.328 [2024-11-06 11:11:14.562400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.562410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 
00:29:23.328 [2024-11-06 11:11:14.562701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.562712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 00:29:23.328 [2024-11-06 11:11:14.563019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.563031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 00:29:23.328 [2024-11-06 11:11:14.563332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.563343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 00:29:23.328 [2024-11-06 11:11:14.563677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.563688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 00:29:23.328 [2024-11-06 11:11:14.564013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.564024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 
00:29:23.328 [2024-11-06 11:11:14.564307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.564317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 00:29:23.328 [2024-11-06 11:11:14.564685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.564696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 00:29:23.328 [2024-11-06 11:11:14.564988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.564999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 00:29:23.328 [2024-11-06 11:11:14.565307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.565318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 00:29:23.328 [2024-11-06 11:11:14.565659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.565671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 
00:29:23.328 [2024-11-06 11:11:14.565964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.565974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 00:29:23.328 [2024-11-06 11:11:14.566274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.566285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 00:29:23.328 [2024-11-06 11:11:14.566624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.566634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 00:29:23.328 [2024-11-06 11:11:14.566927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.566939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.328 qpair failed and we were unable to recover it. 00:29:23.328 [2024-11-06 11:11:14.567246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.328 [2024-11-06 11:11:14.567257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 
00:29:23.329 [2024-11-06 11:11:14.567455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.567465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.567791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.567802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.568007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.568021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.568205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.568216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.568582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.568593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 
00:29:23.329 [2024-11-06 11:11:14.568896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.568907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.569209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.569219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.569586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.569597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.569782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.569794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.569980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.569991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 
00:29:23.329 [2024-11-06 11:11:14.570181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.570191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.570489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.570500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.570698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.570709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.570787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.570799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.571133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.571144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 
00:29:23.329 [2024-11-06 11:11:14.571475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.571486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.571684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.571695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.572021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.572032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.572226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.572236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.572562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.572573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 
00:29:23.329 [2024-11-06 11:11:14.572881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.572892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.573193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.573203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.573553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.573564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.573869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.573880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.574198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.574210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 
00:29:23.329 [2024-11-06 11:11:14.574576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.574587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.574776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.574788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.575079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.575090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.575396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.575407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.575699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.575711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 
00:29:23.329 [2024-11-06 11:11:14.576004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.576016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.576305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.576316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.576623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.576633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.576903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.576914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.577096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.577108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 
00:29:23.329 [2024-11-06 11:11:14.577215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.577225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.577493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.329 [2024-11-06 11:11:14.577503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.329 qpair failed and we were unable to recover it. 00:29:23.329 [2024-11-06 11:11:14.577821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.577832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 00:29:23.330 [2024-11-06 11:11:14.578148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.578159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 00:29:23.330 [2024-11-06 11:11:14.578457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.578468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 
00:29:23.330 [2024-11-06 11:11:14.578799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.578811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 00:29:23.330 [2024-11-06 11:11:14.579209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.579221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 00:29:23.330 [2024-11-06 11:11:14.579548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.579558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 00:29:23.330 [2024-11-06 11:11:14.579885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.579897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 00:29:23.330 [2024-11-06 11:11:14.580317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.580328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 
00:29:23.330 [2024-11-06 11:11:14.580511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.580522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 00:29:23.330 [2024-11-06 11:11:14.580810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.580822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 00:29:23.330 [2024-11-06 11:11:14.580998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.581009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 00:29:23.330 [2024-11-06 11:11:14.581341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.581351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 00:29:23.330 [2024-11-06 11:11:14.581696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.581706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 
00:29:23.330 [2024-11-06 11:11:14.581894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.581907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 00:29:23.330 [2024-11-06 11:11:14.582079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.582090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 00:29:23.330 [2024-11-06 11:11:14.582388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.582399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 00:29:23.330 [2024-11-06 11:11:14.582561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.582573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 00:29:23.330 [2024-11-06 11:11:14.582934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.330 [2024-11-06 11:11:14.582945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.330 qpair failed and we were unable to recover it. 
00:29:23.330 [2024-11-06 11:11:14.583073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.330 [2024-11-06 11:11:14.583084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.330 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for every reconnect attempt between 11:11:14.583 and 11:11:14.618; each attempt fails with errno = 111 (connection refused) against addr=10.0.0.2, port=4420, tqpair=0x20af0c0 ...]
00:29:23.333 [2024-11-06 11:11:14.618104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.333 [2024-11-06 11:11:14.618114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.333 qpair failed and we were unable to recover it.
00:29:23.333 [2024-11-06 11:11:14.618414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.333 [2024-11-06 11:11:14.618425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.333 qpair failed and we were unable to recover it. 00:29:23.333 [2024-11-06 11:11:14.618727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.333 [2024-11-06 11:11:14.618737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.333 qpair failed and we were unable to recover it. 00:29:23.333 [2024-11-06 11:11:14.619080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.333 [2024-11-06 11:11:14.619091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.333 qpair failed and we were unable to recover it. 00:29:23.333 [2024-11-06 11:11:14.619421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.333 [2024-11-06 11:11:14.619433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.333 qpair failed and we were unable to recover it. 00:29:23.333 [2024-11-06 11:11:14.619740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.333 [2024-11-06 11:11:14.619756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.333 qpair failed and we were unable to recover it. 
00:29:23.333 [2024-11-06 11:11:14.619931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.333 [2024-11-06 11:11:14.619942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.333 qpair failed and we were unable to recover it. 00:29:23.333 [2024-11-06 11:11:14.620292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.333 [2024-11-06 11:11:14.620302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.333 qpair failed and we were unable to recover it. 00:29:23.333 [2024-11-06 11:11:14.620610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.333 [2024-11-06 11:11:14.620621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.333 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.620910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.620923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.621237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.621248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 
00:29:23.334 [2024-11-06 11:11:14.621582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.621593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.621903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.621914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.622222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.622232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.622536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.622546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.622884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.622895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 
00:29:23.334 [2024-11-06 11:11:14.623206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.623216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.623527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.623538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.623717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.623728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.624041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.624052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.624379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.624391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 
00:29:23.334 [2024-11-06 11:11:14.624693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.624704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.624903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.624914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.625213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.625227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.625555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.625565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.625877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.625889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 
00:29:23.334 [2024-11-06 11:11:14.626202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.626213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.626543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.626554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.626885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.626896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.627224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.627235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.627546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.627557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 
00:29:23.334 [2024-11-06 11:11:14.627757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.627768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.628070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.628081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.628382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.628393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.628695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.628706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.629010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.629021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 
00:29:23.334 [2024-11-06 11:11:14.629331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.629342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.629651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.629662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.629972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.629983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.630322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.630333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.334 [2024-11-06 11:11:14.630635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.630645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 
00:29:23.334 [2024-11-06 11:11:14.630965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.334 [2024-11-06 11:11:14.630976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.334 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.631281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.631292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.631571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.631581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.631891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.631902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.632204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.632215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 
00:29:23.335 [2024-11-06 11:11:14.632500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.632511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.632854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.632867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.633192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.633203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.633512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.633524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.633855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.633868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 
00:29:23.335 [2024-11-06 11:11:14.634160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.634170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.634471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.634482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.634754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.634765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.635085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.635095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.635382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.635393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 
00:29:23.335 [2024-11-06 11:11:14.635693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.635703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.636029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.636040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.636340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.636351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.636688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.636700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.637034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.637045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 
00:29:23.335 [2024-11-06 11:11:14.637387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.637398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.637702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.637713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.638051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.638062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.638388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.638400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.638700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.638711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 
00:29:23.335 [2024-11-06 11:11:14.639010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.639021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.639355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.639367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.639699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.639709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.640032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.640043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.640340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.640352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 
00:29:23.335 [2024-11-06 11:11:14.640685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.640696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.640997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.641008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.641322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.641333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.641642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.641653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 00:29:23.335 [2024-11-06 11:11:14.641966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.335 [2024-11-06 11:11:14.641977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.335 qpair failed and we were unable to recover it. 
00:29:23.335 [2024-11-06 11:11:14.642303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.335 [2024-11-06 11:11:14.642315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.335 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock failure for tqpair=0x20af0c0 (addr=10.0.0.2, port=4420) repeats continuously through 11:11:14.677 ...]
00:29:23.338 [2024-11-06 11:11:14.678036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.678047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.678321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.678331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.678522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.678534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.678859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.678871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.679204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.679215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 
00:29:23.339 [2024-11-06 11:11:14.679523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.679535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.679842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.679854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.680015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.680027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.680208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.680219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.680509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.680519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 
00:29:23.339 [2024-11-06 11:11:14.680875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.680887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.681223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.681234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.681449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.681460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.681773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.681784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.682072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.682083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 
00:29:23.339 [2024-11-06 11:11:14.682363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.682374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.682688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.682699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.683055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.683066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.683373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.683383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.683543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.683555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 
00:29:23.339 [2024-11-06 11:11:14.683889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.683900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.684201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.684212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.684512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.684523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.684816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.684827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.685136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.685147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 
00:29:23.339 [2024-11-06 11:11:14.685367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.685379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.685664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.685675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.685988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.686000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.686334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.686345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.686637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.686648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 
00:29:23.339 [2024-11-06 11:11:14.686967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.686979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-11-06 11:11:14.687279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-11-06 11:11:14.687289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.687575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.687586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.687913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.687923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.688226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.688237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 
00:29:23.340 [2024-11-06 11:11:14.688542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.688554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.688869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.688881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.689210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.689221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.689521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.689531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.689854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.689865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 
00:29:23.340 [2024-11-06 11:11:14.690170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.690181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.690482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.690492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.690769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.690781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.691080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.691092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.691427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.691438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 
00:29:23.340 [2024-11-06 11:11:14.691767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.691778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.692096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.692108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.692404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.692416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.692695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.692706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.693048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.693061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 
00:29:23.340 [2024-11-06 11:11:14.693367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.693378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.693679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.693691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.693992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.694003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.694311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.694322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.694624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.694635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 
00:29:23.340 [2024-11-06 11:11:14.694963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.694975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.695315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.695327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.695656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.695667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.696009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.696020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.696332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.696343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 
00:29:23.340 [2024-11-06 11:11:14.696631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.696642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.696957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.696968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.697293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.697304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.697643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.697656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.697985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.697997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 
00:29:23.340 [2024-11-06 11:11:14.698168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.698180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.698503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.698514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.698838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.698850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.699181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-11-06 11:11:14.699192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-11-06 11:11:14.699495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.699506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 
00:29:23.341 [2024-11-06 11:11:14.699821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.699832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.700158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.700169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.700465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.700476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.700776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.700787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.701093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.701104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 
00:29:23.341 [2024-11-06 11:11:14.701431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.701442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.701774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.701788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.702108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.702119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.702299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.702312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.702619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.702629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 
00:29:23.341 [2024-11-06 11:11:14.702958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.702969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.703292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.703304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.703609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.703620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.703947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.703959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.704296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.704307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 
00:29:23.341 [2024-11-06 11:11:14.704613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.704624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.704832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.704843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.705200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.705211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.705538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.705550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.705851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.705862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 
00:29:23.341 [2024-11-06 11:11:14.706174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.706185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.706485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.706496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.706797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.706809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.707115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.707126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.707454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.707466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 
00:29:23.341 [2024-11-06 11:11:14.707808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.707819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.708149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.708160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.708457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.708468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.708768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.708779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.709116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.709127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 
00:29:23.341 [2024-11-06 11:11:14.709455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.709466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.709773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.709785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.710092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.710103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.710418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.710429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-11-06 11:11:14.710764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-11-06 11:11:14.710776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 
00:29:23.342 [2024-11-06 11:11:14.711094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.711104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.711402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.711413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.711778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.711789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.712120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.712131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.712454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.712465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 
00:29:23.342 [2024-11-06 11:11:14.712775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.712787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.713108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.713119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.713422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.713433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.713734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.713750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.714046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.714058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 
00:29:23.342 [2024-11-06 11:11:14.714226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.714236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.714571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.714582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.714902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.714914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.715231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.715242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.715549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.715560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 
00:29:23.342 [2024-11-06 11:11:14.715883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.715895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.716219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.716230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.716530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.716540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.716847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.716859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.717188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.717199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 
00:29:23.342 [2024-11-06 11:11:14.717508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.717520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.717822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.717833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.718146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.718157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.718488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.718499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.718809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.718820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 
00:29:23.342 [2024-11-06 11:11:14.719131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.719142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.719457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.719468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.719754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.719766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.720080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.720090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.720391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.720402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 
00:29:23.342 [2024-11-06 11:11:14.720705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.720715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.721021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.721033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.721336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.721347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.721676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.721687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-11-06 11:11:14.721990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.722001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 
00:29:23.342 [2024-11-06 11:11:14.722330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-11-06 11:11:14.722341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.621 [2024-11-06 11:11:14.722716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.621 [2024-11-06 11:11:14.722729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.621 qpair failed and we were unable to recover it. 00:29:23.621 [2024-11-06 11:11:14.723043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.621 [2024-11-06 11:11:14.723056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.621 qpair failed and we were unable to recover it. 00:29:23.621 [2024-11-06 11:11:14.723380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.621 [2024-11-06 11:11:14.723392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.621 qpair failed and we were unable to recover it. 00:29:23.621 [2024-11-06 11:11:14.723547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.621 [2024-11-06 11:11:14.723561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.621 qpair failed and we were unable to recover it. 
00:29:23.621 [2024-11-06 11:11:14.723780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.621 [2024-11-06 11:11:14.723792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.621 qpair failed and we were unable to recover it. 00:29:23.621 [2024-11-06 11:11:14.724142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.621 [2024-11-06 11:11:14.724152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.621 qpair failed and we were unable to recover it. 00:29:23.621 [2024-11-06 11:11:14.724449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.621 [2024-11-06 11:11:14.724459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.621 qpair failed and we were unable to recover it. 00:29:23.621 [2024-11-06 11:11:14.724772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.621 [2024-11-06 11:11:14.724784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.621 qpair failed and we were unable to recover it. 00:29:23.621 [2024-11-06 11:11:14.725005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.621 [2024-11-06 11:11:14.725018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.621 qpair failed and we were unable to recover it. 
00:29:23.621 [2024-11-06 11:11:14.725337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.621 [2024-11-06 11:11:14.725349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.621 qpair failed and we were unable to recover it. 00:29:23.621 [2024-11-06 11:11:14.726239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.621 [2024-11-06 11:11:14.726266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.621 qpair failed and we were unable to recover it. 00:29:23.621 [2024-11-06 11:11:14.726446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.621 [2024-11-06 11:11:14.726459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.621 qpair failed and we were unable to recover it. 00:29:23.621 [2024-11-06 11:11:14.726656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.621 [2024-11-06 11:11:14.726668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.621 qpair failed and we were unable to recover it. 00:29:23.621 [2024-11-06 11:11:14.726830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.621 [2024-11-06 11:11:14.726841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.621 qpair failed and we were unable to recover it. 
00:29:23.621 [2024-11-06 11:11:14.727023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.621 [2024-11-06 11:11:14.727035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.621 qpair failed and we were unable to recover it. 00:29:23.621 [2024-11-06 11:11:14.727327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.621 [2024-11-06 11:11:14.727338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.621 qpair failed and we were unable to recover it. 00:29:23.621 [2024-11-06 11:11:14.727502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.621 [2024-11-06 11:11:14.727514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.621 qpair failed and we were unable to recover it. 00:29:23.621 [2024-11-06 11:11:14.727813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.622 [2024-11-06 11:11:14.727824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.622 qpair failed and we were unable to recover it. 00:29:23.622 [2024-11-06 11:11:14.728009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.622 [2024-11-06 11:11:14.728020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.622 qpair failed and we were unable to recover it. 
00:29:23.622 [2024-11-06 11:11:14.728309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.622 [2024-11-06 11:11:14.728319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.622 qpair failed and we were unable to recover it. 00:29:23.622 [2024-11-06 11:11:14.728505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.622 [2024-11-06 11:11:14.728516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.622 qpair failed and we were unable to recover it. 00:29:23.622 [2024-11-06 11:11:14.728847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.622 [2024-11-06 11:11:14.728858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.622 qpair failed and we were unable to recover it. 00:29:23.622 [2024-11-06 11:11:14.729191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.622 [2024-11-06 11:11:14.729202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.622 qpair failed and we were unable to recover it. 00:29:23.622 [2024-11-06 11:11:14.729537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.622 [2024-11-06 11:11:14.729549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.622 qpair failed and we were unable to recover it. 
00:29:23.622 [2024-11-06 11:11:14.729736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.622 [2024-11-06 11:11:14.729751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.622 qpair failed and we were unable to recover it. 00:29:23.622 [2024-11-06 11:11:14.730037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.622 [2024-11-06 11:11:14.730047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.622 qpair failed and we were unable to recover it. 00:29:23.622 [2024-11-06 11:11:14.730226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.622 [2024-11-06 11:11:14.730237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.622 qpair failed and we were unable to recover it. 00:29:23.622 [2024-11-06 11:11:14.730520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.622 [2024-11-06 11:11:14.730532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.622 qpair failed and we were unable to recover it. 00:29:23.622 [2024-11-06 11:11:14.730690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.622 [2024-11-06 11:11:14.730702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.622 qpair failed and we were unable to recover it. 
00:29:23.622 [2024-11-06 11:11:14.731025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.622 [2024-11-06 11:11:14.731038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.622 qpair failed and we were unable to recover it. 00:29:23.622 [2024-11-06 11:11:14.731365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.622 [2024-11-06 11:11:14.731380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.622 qpair failed and we were unable to recover it. 00:29:23.622 [2024-11-06 11:11:14.731578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.622 [2024-11-06 11:11:14.731590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.622 qpair failed and we were unable to recover it. 00:29:23.622 [2024-11-06 11:11:14.731894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.622 [2024-11-06 11:11:14.731906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.622 qpair failed and we were unable to recover it. 00:29:23.622 [2024-11-06 11:11:14.732220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.622 [2024-11-06 11:11:14.732231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.622 qpair failed and we were unable to recover it. 
00:29:23.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3444368 Killed "${NVMF_APP[@]}" "$@"
00:29:23.626 [2024-11-06 11:11:14.764613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.626 [2024-11-06 11:11:14.764625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.626 qpair failed and we were unable to recover it.
00:29:23.626 [... same error sequence for tqpair=0x20af0c0 repeated 2 more times at 11:11:14.764912 and 11:11:14.765234 ...]
00:29:23.626 11:11:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:23.626 [2024-11-06 11:11:14.765547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.626 [2024-11-06 11:11:14.765559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.626 qpair failed and we were unable to recover it.
00:29:23.626 11:11:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:23.626 [2024-11-06 11:11:14.765890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.626 [2024-11-06 11:11:14.765900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.626 qpair failed and we were unable to recover it.
00:29:23.626 [2024-11-06 11:11:14.766001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.626 [2024-11-06 11:11:14.766010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.626 qpair failed and we were unable to recover it.
00:29:23.626 11:11:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:23.626 11:11:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:23.626 [2024-11-06 11:11:14.766410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.626 [2024-11-06 11:11:14.766441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:23.626 qpair failed and we were unable to recover it.
00:29:23.626 11:11:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:23.626 [2024-11-06 11:11:14.766948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.626 [2024-11-06 11:11:14.766978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:23.626 qpair failed and we were unable to recover it.
00:29:23.626 [2024-11-06 11:11:14.767234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.626 [2024-11-06 11:11:14.767244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:23.626 qpair failed and we were unable to recover it.
00:29:23.627 [... same connect() / sock connection error / qpair failed sequence for tqpair=0x7fbdac000b90 repeated 19 more times between 11:11:14.767552 and 11:11:14.773114, identical apart from timestamps ...]
00:29:23.627 [2024-11-06 11:11:14.773451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.627 [2024-11-06 11:11:14.773460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:23.627 qpair failed and we were unable to recover it.
00:29:23.627 [... same error sequence for tqpair=0x7fbdac000b90 repeated 2 more times at 11:11:14.773804 and 11:11:14.774152 ...]
00:29:23.627 11:11:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3445290
00:29:23.627 [2024-11-06 11:11:14.774476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.627 [2024-11-06 11:11:14.774484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:23.627 qpair failed and we were unable to recover it.
00:29:23.627 11:11:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3445290
00:29:23.627 [2024-11-06 11:11:14.774795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.627 [2024-11-06 11:11:14.774805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:23.627 qpair failed and we were unable to recover it.
00:29:23.627 11:11:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:23.627 11:11:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3445290 ']'
00:29:23.627 [2024-11-06 11:11:14.775118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.627 [2024-11-06 11:11:14.775127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:23.627 qpair failed and we were unable to recover it.
00:29:23.627 11:11:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:23.627 [2024-11-06 11:11:14.775433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.627 [2024-11-06 11:11:14.775442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:23.627 qpair failed and we were unable to recover it.
00:29:23.627 11:11:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:23.627 [2024-11-06 11:11:14.775638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.627 [2024-11-06 11:11:14.775647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:23.627 qpair failed and we were unable to recover it.
00:29:23.627 11:11:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:23.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:23.627 [2024-11-06 11:11:14.775883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.627 [2024-11-06 11:11:14.775892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:23.627 qpair failed and we were unable to recover it.
00:29:23.627 11:11:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:23.627 [2024-11-06 11:11:14.776224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.627 [2024-11-06 11:11:14.776233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:23.627 qpair failed and we were unable to recover it.
00:29:23.627 11:11:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:23.627 [2024-11-06 11:11:14.776436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.627 [2024-11-06 11:11:14.776445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:23.627 qpair failed and we were unable to recover it.
00:29:23.627 [... same error sequence for tqpair=0x7fbdac000b90 repeated 4 more times between 11:11:14.776772 and 11:11:14.777769 ...]
00:29:23.627 [2024-11-06 11:11:14.778006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.627 [2024-11-06 11:11:14.778015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:23.627 qpair failed and we were unable to recover it.
00:29:23.629 [... same connect() / sock connection error / qpair failed sequence for tqpair=0x7fbdac000b90 repeated 49 more times between 11:11:14.778355 and 11:11:14.792777, identical apart from timestamps ...]
00:29:23.629 [2024-11-06 11:11:14.793102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.793110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.793445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.793452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.793769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.793778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.794172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.794180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.794499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.794508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 
00:29:23.629 [2024-11-06 11:11:14.794829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.794838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.795135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.795146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.795451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.795460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.795633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.795642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.795832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.795841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 
00:29:23.629 [2024-11-06 11:11:14.796202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.796209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.796287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.796294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.796448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.796456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.796804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.796812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.797130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.797138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 
00:29:23.629 [2024-11-06 11:11:14.797307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.797316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.797528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.797536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.797795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.797804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.798029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.798038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.798217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.798225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 
00:29:23.629 [2024-11-06 11:11:14.798592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.798599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.798802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.798810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.799126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.799135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.799459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.799467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.799816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.799825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 
00:29:23.629 [2024-11-06 11:11:14.800160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.800168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.800503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.800511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.800817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.800825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.801156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.801164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.801478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.801487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 
00:29:23.629 [2024-11-06 11:11:14.801801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.801809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.802000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.802008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.802418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.802425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.802767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.802775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.803177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.803185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 
00:29:23.629 [2024-11-06 11:11:14.803354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.629 [2024-11-06 11:11:14.803363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.629 qpair failed and we were unable to recover it. 00:29:23.629 [2024-11-06 11:11:14.803542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.803549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.803853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.803861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.804210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.804218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.804547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.804555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 
00:29:23.630 [2024-11-06 11:11:14.804730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.804739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.805065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.805073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.805414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.805421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.805736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.805743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.805958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.805967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 
00:29:23.630 [2024-11-06 11:11:14.806155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.806163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.806353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.806365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.806593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.806601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.806924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.806932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.807254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.807262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 
00:29:23.630 [2024-11-06 11:11:14.807430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.807438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.807740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.807751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.807933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.807941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.808259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.808268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.808435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.808444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 
00:29:23.630 [2024-11-06 11:11:14.808780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.808788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.808988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.808995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.809169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.809177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.809351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.809358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.809675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.809682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 
00:29:23.630 [2024-11-06 11:11:14.810009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.810017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.810071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.810078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.810452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.810460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.810688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.810696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.811000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.811008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 
00:29:23.630 [2024-11-06 11:11:14.811320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.811328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.811644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.811652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.812014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.812023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.812311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.812319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.812512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.812520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 
00:29:23.630 [2024-11-06 11:11:14.812715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.812723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.812910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.812919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.813242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.813250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.813569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.813577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 00:29:23.630 [2024-11-06 11:11:14.813894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.630 [2024-11-06 11:11:14.813902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.630 qpair failed and we were unable to recover it. 
00:29:23.631 [2024-11-06 11:11:14.813949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.631 [2024-11-06 11:11:14.813955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.631 qpair failed and we were unable to recover it. 00:29:23.631 [2024-11-06 11:11:14.814112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.631 [2024-11-06 11:11:14.814120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.631 qpair failed and we were unable to recover it. 00:29:23.631 [2024-11-06 11:11:14.814449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.631 [2024-11-06 11:11:14.814458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.631 qpair failed and we were unable to recover it. 00:29:23.631 [2024-11-06 11:11:14.814777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.631 [2024-11-06 11:11:14.814786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.631 qpair failed and we were unable to recover it. 00:29:23.631 [2024-11-06 11:11:14.814973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.631 [2024-11-06 11:11:14.814981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.631 qpair failed and we were unable to recover it. 
00:29:23.633 [2024-11-06 11:11:14.835846] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization...
00:29:23.633 [2024-11-06 11:11:14.835893] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:23.634 [2024-11-06 11:11:14.847905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.847912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.848117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.848124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.848444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.848452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.848774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.848782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.849103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.849111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 
00:29:23.634 [2024-11-06 11:11:14.849450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.849459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.849778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.849787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.849965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.849973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.850290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.850297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.850461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.850470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 
00:29:23.634 [2024-11-06 11:11:14.850734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.850742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.851085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.851094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.851399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.851407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.851607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.851616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.851919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.851928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 
00:29:23.634 [2024-11-06 11:11:14.852245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.852253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.852449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.852459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.852651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.852659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.852958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.852966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.853265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.853272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 
00:29:23.634 [2024-11-06 11:11:14.853573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.853582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.853901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.853910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.854256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.854263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.854431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.854438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.854827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.854834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 
00:29:23.634 [2024-11-06 11:11:14.855159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.855168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.855365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.855374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.855698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.855707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.856009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.856016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.856324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.856332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 
00:29:23.634 [2024-11-06 11:11:14.856631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.856640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.856984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.856992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.634 qpair failed and we were unable to recover it. 00:29:23.634 [2024-11-06 11:11:14.857329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.634 [2024-11-06 11:11:14.857337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.857657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.857664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.858011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.858020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 
00:29:23.635 [2024-11-06 11:11:14.858350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.858359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.858668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.858677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.859004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.859013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.859214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.859222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.859547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.859555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 
00:29:23.635 [2024-11-06 11:11:14.859901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.859910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.860238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.860247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.860563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.860571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.860893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.860901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.861243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.861251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 
00:29:23.635 [2024-11-06 11:11:14.861532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.861539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.861854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.861862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.862184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.862192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.862369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.862378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.862664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.862672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 
00:29:23.635 [2024-11-06 11:11:14.863010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.863018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.863335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.863343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.863644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.863653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.863846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.863853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.864231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.864239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 
00:29:23.635 [2024-11-06 11:11:14.864558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.864566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.864904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.864914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.865227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.865235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.865563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.865571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.865883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.865891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 
00:29:23.635 [2024-11-06 11:11:14.866206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.866213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.866526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.866533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.866809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.866817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.867143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.867151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.867444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.867452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 
00:29:23.635 [2024-11-06 11:11:14.867609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.867618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.867941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.867950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.868267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.868275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.868578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.868586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.868906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.868916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 
00:29:23.635 [2024-11-06 11:11:14.869100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.635 [2024-11-06 11:11:14.869107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.635 qpair failed and we were unable to recover it. 00:29:23.635 [2024-11-06 11:11:14.869469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.636 [2024-11-06 11:11:14.869478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.636 qpair failed and we were unable to recover it. 00:29:23.636 [2024-11-06 11:11:14.869819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.636 [2024-11-06 11:11:14.869827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.636 qpair failed and we were unable to recover it. 00:29:23.636 [2024-11-06 11:11:14.870123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.636 [2024-11-06 11:11:14.870131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.636 qpair failed and we were unable to recover it. 00:29:23.636 [2024-11-06 11:11:14.870448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.636 [2024-11-06 11:11:14.870456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.636 qpair failed and we were unable to recover it. 
00:29:23.636 [2024-11-06 11:11:14.870778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-11-06 11:11:14.870787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
[... same three-record sequence (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated for every retry from 2024-11-06 11:11:14.871112 through 2024-11-06 11:11:14.903916 ...]
00:29:23.639 [2024-11-06 11:11:14.904236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.904244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.904563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.904572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.904885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.904893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.905166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.905175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.905326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.905334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 
00:29:23.639 [2024-11-06 11:11:14.905637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.905645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.906039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.906047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.906358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.906366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.906534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.906543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.906853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.906861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 
00:29:23.639 [2024-11-06 11:11:14.907199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.907207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.907519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.907527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.907660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.907667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.907856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.907864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.908053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.908060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 
00:29:23.639 [2024-11-06 11:11:14.908261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.908268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.908600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.908608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.908787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.908796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.909054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.909062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.909276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.909284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 
00:29:23.639 [2024-11-06 11:11:14.909607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.909615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.909878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.909887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.910186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.910194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.910358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.910367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.910726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.910734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 
00:29:23.639 [2024-11-06 11:11:14.910923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.910930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.911281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.911290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.911461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.911469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.911705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.911713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-11-06 11:11:14.911916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.911925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 
00:29:23.639 [2024-11-06 11:11:14.912114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-11-06 11:11:14.912123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.912442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.912451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.912765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.912774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.912965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.912972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.913285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.913293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 
00:29:23.640 [2024-11-06 11:11:14.913651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.913660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.913984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.913992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.914346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.914353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.914656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.914664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.914985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.914993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 
00:29:23.640 [2024-11-06 11:11:14.915301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.915309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.915671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.915680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.916019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.916028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.916204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.916211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.916404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.916412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 
00:29:23.640 [2024-11-06 11:11:14.916645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.916653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.916962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.916970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.917168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.917176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.917390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.917399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.917819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.917827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 
00:29:23.640 [2024-11-06 11:11:14.918014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.918024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.918338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.918346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.918652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.918660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.918998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.919006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.919172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.919180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 
00:29:23.640 [2024-11-06 11:11:14.919358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.919366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.919642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.919650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.920037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.920046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.920243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.920252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.920467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.920476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 
00:29:23.640 [2024-11-06 11:11:14.920657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.920667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.920963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.920971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.921280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.921288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.921590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.921598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-11-06 11:11:14.921918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-11-06 11:11:14.921926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 
00:29:23.641 [2024-11-06 11:11:14.922245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.641 [2024-11-06 11:11:14.922253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.641 qpair failed and we were unable to recover it. 00:29:23.641 [2024-11-06 11:11:14.922415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.641 [2024-11-06 11:11:14.922426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.641 qpair failed and we were unable to recover it. 00:29:23.641 [2024-11-06 11:11:14.922742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.641 [2024-11-06 11:11:14.922753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.641 qpair failed and we were unable to recover it. 00:29:23.641 [2024-11-06 11:11:14.922937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.641 [2024-11-06 11:11:14.922946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.641 qpair failed and we were unable to recover it. 00:29:23.641 [2024-11-06 11:11:14.923243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.641 [2024-11-06 11:11:14.923251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.641 qpair failed and we were unable to recover it. 
00:29:23.641 [2024-11-06 11:11:14.923407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.641 [2024-11-06 11:11:14.923415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.641 qpair failed and we were unable to recover it. 00:29:23.641 [2024-11-06 11:11:14.923741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.641 [2024-11-06 11:11:14.923752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.641 qpair failed and we were unable to recover it. 00:29:23.641 [2024-11-06 11:11:14.923958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.641 [2024-11-06 11:11:14.923966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.641 qpair failed and we were unable to recover it. 00:29:23.641 [2024-11-06 11:11:14.924132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.641 [2024-11-06 11:11:14.924140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.641 qpair failed and we were unable to recover it. 00:29:23.641 [2024-11-06 11:11:14.924294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.641 [2024-11-06 11:11:14.924302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.641 qpair failed and we were unable to recover it. 
00:29:23.641 [2024-11-06 11:11:14.924512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.641 [2024-11-06 11:11:14.924521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:23.641 qpair failed and we were unable to recover it.
00:29:23.641 [2024-11-06 11:11:14.932079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:23.644 [... the same connect() failed (errno = 111) / sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence repeats through 11:11:14.959302 ...]
00:29:23.644 [2024-11-06 11:11:14.959635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.959644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.959934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.959942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.960257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.960265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.960563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.960571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.960878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.960886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 
00:29:23.644 [2024-11-06 11:11:14.961230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.961238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.961546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.961555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.961882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.961891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.962215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.962223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.962538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.962547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 
00:29:23.644 [2024-11-06 11:11:14.962865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.962873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.963153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.963161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.963335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.963344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.963652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.963660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.963987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.963995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 
00:29:23.644 [2024-11-06 11:11:14.964263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.964271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.964556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.964564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.964740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.964761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.965177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.965186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.965563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.965571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 
00:29:23.644 [2024-11-06 11:11:14.965907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.965916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.966084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.966093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.966382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.966390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.966706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.966714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-11-06 11:11:14.967055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-11-06 11:11:14.967063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 
00:29:23.645 [2024-11-06 11:11:14.967257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.967266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.967544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.967553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.967735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.967730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.645 [2024-11-06 11:11:14.967744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.967761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.645 [2024-11-06 11:11:14.967770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.645 [2024-11-06 11:11:14.967777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.645 [2024-11-06 11:11:14.967783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:23.645 [2024-11-06 11:11:14.968051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.968060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.968230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.968238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.968534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.968543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.968726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.968735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.969035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.969043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 
00:29:23.645 [2024-11-06 11:11:14.969193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:23.645 [2024-11-06 11:11:14.969341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.969350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.969407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:23.645 [2024-11-06 11:11:14.969526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:23.645 [2024-11-06 11:11:14.969579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.969587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.969528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:23.645 [2024-11-06 11:11:14.969910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.969920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.970092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.970100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 
00:29:23.645 [2024-11-06 11:11:14.970299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.970308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.970494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.970503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.970721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.970729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.970969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.970978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.971269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.971278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 
00:29:23.645 [2024-11-06 11:11:14.971604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.971613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.972017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.972025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.972208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.972217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.972516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.972524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.972716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.972726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 
00:29:23.645 [2024-11-06 11:11:14.973035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.973043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.973237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.973245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.973465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.973473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.973733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.973742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.973913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.973921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 
00:29:23.645 [2024-11-06 11:11:14.974098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.974106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.974375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.974385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.974723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.974731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-11-06 11:11:14.975017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-11-06 11:11:14.975026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.646 [2024-11-06 11:11:14.975315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-11-06 11:11:14.975323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 
00:29:23.646 [2024-11-06 11:11:14.975663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-11-06 11:11:14.975671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-11-06 11:11:14.975973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-11-06 11:11:14.975983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-11-06 11:11:14.976044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-11-06 11:11:14.976053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-11-06 11:11:14.976365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-11-06 11:11:14.976373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-11-06 11:11:14.976573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-11-06 11:11:14.976582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 
00:29:23.646 [2024-11-06 11:11:14.976895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-11-06 11:11:14.976903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-11-06 11:11:14.977214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-11-06 11:11:14.977222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-11-06 11:11:14.977541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-11-06 11:11:14.977549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-11-06 11:11:14.977735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-11-06 11:11:14.977744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-11-06 11:11:14.978095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-11-06 11:11:14.978104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 
00:29:23.646 [2024-11-06 11:11:14.978474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-11-06 11:11:14.978482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-11-06 11:11:14.978782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-11-06 11:11:14.978791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-11-06 11:11:14.979095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-11-06 11:11:14.979103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-11-06 11:11:14.979428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-11-06 11:11:14.979436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-11-06 11:11:14.979732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-11-06 11:11:14.979740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 
00:29:23.646 [2024-11-06 11:11:14.980045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.646 [2024-11-06 11:11:14.980054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:23.646 qpair failed and we were unable to recover it.
00:29:23.649 [... the same three-line error repeats for every subsequent connect() attempt against tqpair=0x7fbdac000b90 (addr=10.0.0.2, port=4420) from 11:11:14.980 through 11:11:15.010 ...]
00:29:23.649 [2024-11-06 11:11:15.010269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-11-06 11:11:15.010277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-11-06 11:11:15.010455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-11-06 11:11:15.010463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-11-06 11:11:15.010624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-11-06 11:11:15.010632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-11-06 11:11:15.011002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-11-06 11:11:15.011011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-11-06 11:11:15.011324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-11-06 11:11:15.011332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 
00:29:23.649 [2024-11-06 11:11:15.011661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-11-06 11:11:15.011669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-11-06 11:11:15.011832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-11-06 11:11:15.011840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-11-06 11:11:15.012009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-11-06 11:11:15.012016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-11-06 11:11:15.012313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-11-06 11:11:15.012321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-11-06 11:11:15.012630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-11-06 11:11:15.012639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 
00:29:23.649 [2024-11-06 11:11:15.012977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-11-06 11:11:15.012987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-11-06 11:11:15.013322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-11-06 11:11:15.013331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-11-06 11:11:15.013633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-11-06 11:11:15.013642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-11-06 11:11:15.013836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-11-06 11:11:15.013844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-11-06 11:11:15.014159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-11-06 11:11:15.014167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 
00:29:23.649 [2024-11-06 11:11:15.014459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-11-06 11:11:15.014467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.014772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.014780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.015095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.015104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.015267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.015276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.015603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.015612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 
00:29:23.650 [2024-11-06 11:11:15.015909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.015917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.016246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.016255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.016539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.016548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.016718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.016728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.017046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.017055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 
00:29:23.650 [2024-11-06 11:11:15.017273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.017281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.017596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.017604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.017774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.017783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.018116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.018124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.018430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.018439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 
00:29:23.650 [2024-11-06 11:11:15.018604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.018613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.018792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.018800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.019012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.019021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.019195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.019202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.019422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.019431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 
00:29:23.650 [2024-11-06 11:11:15.019613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.019621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.019897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.019906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.020210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.020218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.020396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.020404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.020706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.020714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 
00:29:23.650 [2024-11-06 11:11:15.020877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.020884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.021102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.021111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.021401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.021409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.021755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.021764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.022074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.022082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 
00:29:23.650 [2024-11-06 11:11:15.022397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.022405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.022736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.022749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.022942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.022951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.023118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.023124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.023341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.023349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 
00:29:23.650 [2024-11-06 11:11:15.023388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.023394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.023690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.023699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.024034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.024043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-11-06 11:11:15.024355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-11-06 11:11:15.024363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.929 [2024-11-06 11:11:15.024668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.024679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 
00:29:23.929 [2024-11-06 11:11:15.024987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.024996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-11-06 11:11:15.025307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.025315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-11-06 11:11:15.025626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.025634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-11-06 11:11:15.025973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.025981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-11-06 11:11:15.026317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.026327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 
00:29:23.929 [2024-11-06 11:11:15.026632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.026641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-11-06 11:11:15.026960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.026969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-11-06 11:11:15.027155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.027164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-11-06 11:11:15.027443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.027452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-11-06 11:11:15.027734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.027742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 
00:29:23.929 [2024-11-06 11:11:15.028072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.028081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-11-06 11:11:15.028389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.028397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-11-06 11:11:15.028683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.028691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-11-06 11:11:15.028977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.028986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-11-06 11:11:15.029248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.029257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 
00:29:23.929 [2024-11-06 11:11:15.029558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.029566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-11-06 11:11:15.029868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.029878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-11-06 11:11:15.030193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.030202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-11-06 11:11:15.030508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.030517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-11-06 11:11:15.030804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.030813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 
00:29:23.929 [2024-11-06 11:11:15.031223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.929 [2024-11-06 11:11:15.031231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.932 [previous messages repeated 114 more times through 2024-11-06 11:11:15.064802]
00:29:23.932 [2024-11-06 11:11:15.065151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.932 [2024-11-06 11:11:15.065159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.932 qpair failed and we were unable to recover it. 00:29:23.932 [2024-11-06 11:11:15.065446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.932 [2024-11-06 11:11:15.065454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.932 qpair failed and we were unable to recover it. 00:29:23.932 [2024-11-06 11:11:15.065610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.932 [2024-11-06 11:11:15.065618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.932 qpair failed and we were unable to recover it. 00:29:23.932 [2024-11-06 11:11:15.065788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.932 [2024-11-06 11:11:15.065796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.932 qpair failed and we were unable to recover it. 00:29:23.932 [2024-11-06 11:11:15.066126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.932 [2024-11-06 11:11:15.066135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.932 qpair failed and we were unable to recover it. 
00:29:23.933 [2024-11-06 11:11:15.066173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.066179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.066491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.066499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.066788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.066797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.066970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.066978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.067279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.067287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 
00:29:23.933 [2024-11-06 11:11:15.067582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.067590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.067897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.067905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.068180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.068188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.068489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.068497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.068791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.068799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 
00:29:23.933 [2024-11-06 11:11:15.068982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.068991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.069322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.069331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.069642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.069649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.069974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.069982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.070264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.070271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 
00:29:23.933 [2024-11-06 11:11:15.070453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.070461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.070652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.070660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.070956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.070964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.071251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.071259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.071574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.071583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 
00:29:23.933 [2024-11-06 11:11:15.071916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.071923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.072112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.072119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.072451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.072458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.072634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.072643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.072946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.072954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 
00:29:23.933 [2024-11-06 11:11:15.073248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.073255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.073548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.073556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.073855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.073864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.074050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.074058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.074221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.074228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 
00:29:23.933 [2024-11-06 11:11:15.074539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.074547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.074697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.933 [2024-11-06 11:11:15.074704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.933 qpair failed and we were unable to recover it. 00:29:23.933 [2024-11-06 11:11:15.074896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.074903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.075197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.075206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.075528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.075536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 
00:29:23.934 [2024-11-06 11:11:15.075807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.075815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.075899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.075906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.076089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.076098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.076409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.076418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.076462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.076470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 
00:29:23.934 [2024-11-06 11:11:15.076750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.076759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.076941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.076950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.077129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.077137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.077438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.077445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.077694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.077702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 
00:29:23.934 [2024-11-06 11:11:15.077990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.077998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.078347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.078355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.078535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.078544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.078841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.078849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.079038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.079047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 
00:29:23.934 [2024-11-06 11:11:15.079090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.079099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.079257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.079267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.079456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.079464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.079794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.079802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.080003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.080013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 
00:29:23.934 [2024-11-06 11:11:15.080379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.080387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.080718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.080725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.080911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.080920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.081242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.081249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.081581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.081589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 
00:29:23.934 [2024-11-06 11:11:15.081923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.081931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.082249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.082257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.082566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.082573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.082757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.082765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.082962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.082970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 
00:29:23.934 [2024-11-06 11:11:15.083298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.083306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.083622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.083629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.083968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.083976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.084270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.084278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 00:29:23.934 [2024-11-06 11:11:15.084449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.934 [2024-11-06 11:11:15.084457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.934 qpair failed and we were unable to recover it. 
00:29:23.934 [2024-11-06 11:11:15.084742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.935 [2024-11-06 11:11:15.084752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.935 qpair failed and we were unable to recover it. 00:29:23.935 [2024-11-06 11:11:15.085035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.935 [2024-11-06 11:11:15.085044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.935 qpair failed and we were unable to recover it. 00:29:23.935 [2024-11-06 11:11:15.085345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.935 [2024-11-06 11:11:15.085352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.935 qpair failed and we were unable to recover it. 00:29:23.935 [2024-11-06 11:11:15.085670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.935 [2024-11-06 11:11:15.085678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.935 qpair failed and we were unable to recover it. 00:29:23.935 [2024-11-06 11:11:15.085983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.935 [2024-11-06 11:11:15.085991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.935 qpair failed and we were unable to recover it. 
00:29:23.937 [2024-11-06 11:11:15.118974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.118982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.119284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.119292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.119603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.119611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.119926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.119934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.120237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.120245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 
00:29:23.938 [2024-11-06 11:11:15.120545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.120553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.120884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.120893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.121201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.121209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.121515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.121523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.121676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.121683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 
00:29:23.938 [2024-11-06 11:11:15.122012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.122020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.122354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.122362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.122686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.122693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.123008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.123016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.123347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.123355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 
00:29:23.938 [2024-11-06 11:11:15.123686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.123694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.124002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.124009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.124314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.124323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.124607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.124615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.124791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.124799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 
00:29:23.938 [2024-11-06 11:11:15.125171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.125178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.125502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.125510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.125852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.125860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.126159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.126167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.126474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.126482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 
00:29:23.938 [2024-11-06 11:11:15.126639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.126645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.126967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.126976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.127270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.127277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.127434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.127441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.127720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.127727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 
00:29:23.938 [2024-11-06 11:11:15.128065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.128073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.128385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.128393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.128701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.128710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-11-06 11:11:15.129029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-11-06 11:11:15.129036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.129322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.129330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 
00:29:23.939 [2024-11-06 11:11:15.129618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.129625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.129903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.129911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.130280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.130287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.130620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.130629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.130926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.130933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 
00:29:23.939 [2024-11-06 11:11:15.131250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.131258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.131572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.131580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.131878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.131886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.132209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.132217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.132524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.132533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 
00:29:23.939 [2024-11-06 11:11:15.132814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.132822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.133175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.133183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.133517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.133525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.133831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.133839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.134114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.134122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 
00:29:23.939 [2024-11-06 11:11:15.134423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.134431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.134605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.134613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.134919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.134927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.135240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.135249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.135431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.135439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 
00:29:23.939 [2024-11-06 11:11:15.135751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.135759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.136068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.136075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.136382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.136390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.136682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.136689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.136990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.136998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 
00:29:23.939 [2024-11-06 11:11:15.137339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.137347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.137543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.137551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.137856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.137864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.138183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.138191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.138502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.138511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 
00:29:23.939 [2024-11-06 11:11:15.138858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.138865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.139197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.139206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.139541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.139549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.139854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.139862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.140062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.140070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 
00:29:23.939 [2024-11-06 11:11:15.140232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.140239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.140560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-11-06 11:11:15.140567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-11-06 11:11:15.140858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-11-06 11:11:15.140866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-11-06 11:11:15.141176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-11-06 11:11:15.141183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-11-06 11:11:15.141525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-11-06 11:11:15.141533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 
00:29:23.940 [2024-11-06 11:11:15.141860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.940 [2024-11-06 11:11:15.141868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420
00:29:23.940 qpair failed and we were unable to recover it.
[... identical connect() failure records (errno = 111, tqpair=0x7fbdac000b90, addr=10.0.0.2, port=4420) repeat for every retry through 2024-11-06 11:11:15.175497; each attempt fails and the qpair is not recovered ...]
00:29:23.943 [2024-11-06 11:11:15.175805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.175812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.176005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.176013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.176225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.176233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.176440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.176448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.176737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.176745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 
00:29:23.943 [2024-11-06 11:11:15.177085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.177093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.177277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.177285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.177657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.177664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.177971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.177979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.178300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.178308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 
00:29:23.943 [2024-11-06 11:11:15.178615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.178623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.178918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.178926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.179266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.179273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.179316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.179323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.179617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.179626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 
00:29:23.943 [2024-11-06 11:11:15.179889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.179898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.180191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.180201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.180358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.180367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.180643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.180652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.180882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.180890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 
00:29:23.943 [2024-11-06 11:11:15.181046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.181053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.181240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.181247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.181418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.181427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.181590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.181597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.181881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.181889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 
00:29:23.943 [2024-11-06 11:11:15.182148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.182157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.182488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.182496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.182678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.182686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.183010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.183018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-11-06 11:11:15.183202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-11-06 11:11:15.183210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 
00:29:23.943 [2024-11-06 11:11:15.183389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.183397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.183614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.183623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.183792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.183800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.184090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.184098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.184397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.184405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 
00:29:23.944 [2024-11-06 11:11:15.184705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.184714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.185030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.185039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.185367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.185376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.185675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.185683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.185863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.185871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 
00:29:23.944 [2024-11-06 11:11:15.186166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.186174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.186506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.186514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.186700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.186708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.186890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.186898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.187066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.187073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 
00:29:23.944 [2024-11-06 11:11:15.187407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.187414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.187696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.187704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.187968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.187976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.188232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.188240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.188435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.188443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 
00:29:23.944 [2024-11-06 11:11:15.188781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.188789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.188874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.188880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.189069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.189076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.189242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.189250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.189513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.189521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 
00:29:23.944 [2024-11-06 11:11:15.189803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.189811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.190069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.190076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.190410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.190418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.190723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.190731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.191003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.191011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 
00:29:23.944 [2024-11-06 11:11:15.191284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.191293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.191627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.191635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.191930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.191937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.192242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.192250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.192570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.192578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 
00:29:23.944 [2024-11-06 11:11:15.192754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.192763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.193101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.193109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.193411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-11-06 11:11:15.193418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-11-06 11:11:15.193723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-11-06 11:11:15.193730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-11-06 11:11:15.194058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-11-06 11:11:15.194066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 
00:29:23.945 [2024-11-06 11:11:15.194350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-11-06 11:11:15.194358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-11-06 11:11:15.194668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-11-06 11:11:15.194676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-11-06 11:11:15.195054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-11-06 11:11:15.195062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-11-06 11:11:15.195216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-11-06 11:11:15.195223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-11-06 11:11:15.195535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-11-06 11:11:15.195542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 
00:29:23.945 [2024-11-06 11:11:15.195857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-11-06 11:11:15.195865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 
[The same pair of errors — posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7fbdac000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats continuously from 11:11:15.195857 through 11:11:15.230451; repeated entries elided.]
00:29:23.948 [2024-11-06 11:11:15.230631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.230639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.230962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.230971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.231277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.231285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.231594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.231602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.231947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.231955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 
00:29:23.948 [2024-11-06 11:11:15.232114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.232122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.232375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.232383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.232711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.232720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.232764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.232772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.233147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.233155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 
00:29:23.948 [2024-11-06 11:11:15.233488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.233497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.233786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.233794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.233972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.233980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.234135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.234142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.234451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.234459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 
00:29:23.948 [2024-11-06 11:11:15.234794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.234802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.234848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.234854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.235030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.235037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.235356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.235364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.235508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.235515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 
00:29:23.948 [2024-11-06 11:11:15.235838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.235846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.236160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.236168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.236471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.236479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.236786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.948 [2024-11-06 11:11:15.236793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.948 qpair failed and we were unable to recover it. 00:29:23.948 [2024-11-06 11:11:15.237054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.237062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 
00:29:23.949 [2024-11-06 11:11:15.237241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.237250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.237410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.237418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.237727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.237735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.237920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.237928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.238202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.238210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 
00:29:23.949 [2024-11-06 11:11:15.238522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.238530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.238838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.238847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.239105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.239113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.239275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.239284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.239619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.239627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 
00:29:23.949 [2024-11-06 11:11:15.239809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.239817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.240138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.240145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.240480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.240488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.240670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.240678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.240990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.240998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 
00:29:23.949 [2024-11-06 11:11:15.241176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.241184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.241468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.241476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.241780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.241788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.242120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.242128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.242460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.242468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 
00:29:23.949 [2024-11-06 11:11:15.242645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.242653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.242913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.242921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.243225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.243233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.243555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.243563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.243901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.243910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 
00:29:23.949 [2024-11-06 11:11:15.244085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.244093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.244399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.244407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.244691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.244700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.244991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.244999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.245307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.245315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 
00:29:23.949 [2024-11-06 11:11:15.245496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.245503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.245822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.245830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.245991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.245998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.246305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.246312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.246351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.246357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 
00:29:23.949 [2024-11-06 11:11:15.246523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.246531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.246836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.949 [2024-11-06 11:11:15.246844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.949 qpair failed and we were unable to recover it. 00:29:23.949 [2024-11-06 11:11:15.247005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.950 [2024-11-06 11:11:15.247012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.950 qpair failed and we were unable to recover it. 00:29:23.950 [2024-11-06 11:11:15.247177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.950 [2024-11-06 11:11:15.247184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.950 qpair failed and we were unable to recover it. 00:29:23.950 [2024-11-06 11:11:15.247429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.950 [2024-11-06 11:11:15.247438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.950 qpair failed and we were unable to recover it. 
00:29:23.950 [2024-11-06 11:11:15.247768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.950 [2024-11-06 11:11:15.247776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.950 qpair failed and we were unable to recover it. 00:29:23.950 [2024-11-06 11:11:15.248120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.950 [2024-11-06 11:11:15.248127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.950 qpair failed and we were unable to recover it. 00:29:23.950 [2024-11-06 11:11:15.248416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.950 [2024-11-06 11:11:15.248424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.950 qpair failed and we were unable to recover it. 00:29:23.950 [2024-11-06 11:11:15.248688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.950 [2024-11-06 11:11:15.248696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.950 qpair failed and we were unable to recover it. 00:29:23.950 [2024-11-06 11:11:15.248844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.950 [2024-11-06 11:11:15.248851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.950 qpair failed and we were unable to recover it. 
00:29:23.950 [2024-11-06 11:11:15.249177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.950 [2024-11-06 11:11:15.249184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.950 qpair failed and we were unable to recover it. 00:29:23.950 [2024-11-06 11:11:15.249484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.950 [2024-11-06 11:11:15.249492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.950 qpair failed and we were unable to recover it. 00:29:23.950 [2024-11-06 11:11:15.249801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.950 [2024-11-06 11:11:15.249809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.950 qpair failed and we were unable to recover it. 00:29:23.950 [2024-11-06 11:11:15.250000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.950 [2024-11-06 11:11:15.250010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.950 qpair failed and we were unable to recover it. 00:29:23.950 [2024-11-06 11:11:15.250318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.950 [2024-11-06 11:11:15.250326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.950 qpair failed and we were unable to recover it. 
00:29:23.950 [2024-11-06 11:11:15.250634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.950 [2024-11-06 11:11:15.250642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.950 qpair failed and we were unable to recover it.
[Identical error pair repeated for every subsequent connection retry through 2024-11-06 11:11:15.284179: each attempt against tqpair=0x7fbdac000b90, addr=10.0.0.2, port=4420 failed with errno = 111 (ECONNREFUSED), and the qpair could not be recovered.]
00:29:23.953 [2024-11-06 11:11:15.284510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.284518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-11-06 11:11:15.284819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.284828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-11-06 11:11:15.285149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.285158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-11-06 11:11:15.285485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.285496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-11-06 11:11:15.285828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.285837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 
00:29:23.953 [2024-11-06 11:11:15.286136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.286144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-11-06 11:11:15.286443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.286451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-11-06 11:11:15.286761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.286769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-11-06 11:11:15.287046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.287054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-11-06 11:11:15.287359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.287367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 
00:29:23.953 [2024-11-06 11:11:15.287567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.287576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-11-06 11:11:15.287899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.287907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-11-06 11:11:15.288239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.288247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-11-06 11:11:15.288557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.288565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-11-06 11:11:15.288870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.288878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 
00:29:23.953 [2024-11-06 11:11:15.289180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.289188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-11-06 11:11:15.289487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.289495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-11-06 11:11:15.289806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.289814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-11-06 11:11:15.290130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.290139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-11-06 11:11:15.290449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-11-06 11:11:15.290458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 
00:29:23.953 [2024-11-06 11:11:15.290769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.290777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.290939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.290947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.291161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.291168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.291502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.291509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.291719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.291728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-11-06 11:11:15.291895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.291902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.292232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.292239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.292417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.292425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.292737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.292744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.293039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.293047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-11-06 11:11:15.293310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.293318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.293648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.293656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.293831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.293839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.294008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.294015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.294275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.294283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-11-06 11:11:15.294477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.294486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.294803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.294811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.294961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.294968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.295121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.295128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.295464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.295473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-11-06 11:11:15.295773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.295781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.295973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.295981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.296313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.296321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.296652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.296662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.296844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.296851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-11-06 11:11:15.297028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.297036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.297355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.297363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.297653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.297661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.297844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.297853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.298144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.298152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-11-06 11:11:15.298338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.298345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.298647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.298654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.298957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.298965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.299276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.299283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.299431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.299439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-11-06 11:11:15.299711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.299718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.299901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.299910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.300239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-11-06 11:11:15.300247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-11-06 11:11:15.300552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.300560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.300726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.300734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-11-06 11:11:15.301014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.301021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.301235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.301243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.301561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.301569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.301876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.301884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.302197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.302205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-11-06 11:11:15.302534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.302542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.302850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.302858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.303173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.303181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.303482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.303490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.303646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.303653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-11-06 11:11:15.303937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.303946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.304256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.304265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.304415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.304422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.304733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.304741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.304955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.304964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-11-06 11:11:15.305289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.305297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.305599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.305607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.305904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.305912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.306079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.306088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.306419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.306427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-11-06 11:11:15.306609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.306617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.306947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.306955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.307214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.307222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.307554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.307563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.307865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.307873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-11-06 11:11:15.308208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.308216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.308402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.308410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.308724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.308731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.308996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.309005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.309324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.309331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-11-06 11:11:15.309678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.309686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.310011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.310019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.310201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.310210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.310252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.310261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.310523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.310531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-11-06 11:11:15.310693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-11-06 11:11:15.310702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-11-06 11:11:15.311007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.311015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.311325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.311333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.311631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.311639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.311972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.311980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 
00:29:23.956 [2024-11-06 11:11:15.312142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.312149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.312449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.312457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.312722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.312730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.313010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.313018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.313364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.313372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 
00:29:23.956 [2024-11-06 11:11:15.313657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.313665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.313928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.313936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.314063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.314070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.314227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.314235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.314361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.314370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 
00:29:23.956 [2024-11-06 11:11:15.314677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.314685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.314984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.314992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.315155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.315163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.315316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.315323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.315623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.315631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 
00:29:23.956 [2024-11-06 11:11:15.315794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.315802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.315991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.315999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.316324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.316331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.316656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.316665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.317025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.317033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 
00:29:23.956 [2024-11-06 11:11:15.317339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.317347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.317630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.317637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.317807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.317815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-11-06 11:11:15.317977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-11-06 11:11:15.317986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdac000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 
00:29:23.956 Read completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Read completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Read completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Read completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Read completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Read completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Read completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Read completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Read completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Read completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Read completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Read completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Read completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Write completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Read completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Write completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Write completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Write completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.956 Read completed with error (sct=0, sc=8) 00:29:23.956 starting I/O failed 00:29:23.957 Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 
Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 [2024-11-06 11:11:15.318759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O 
failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Read completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 Write completed with error (sct=0, sc=8) 00:29:23.957 starting I/O failed 00:29:23.957 [2024-11-06 11:11:15.319557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:23.957 [2024-11-06 11:11:15.320112] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.320153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.957 [2024-11-06 11:11:15.320353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.320368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.957 [2024-11-06 11:11:15.320694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.320705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.957 [2024-11-06 11:11:15.320900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.320912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.957 [2024-11-06 11:11:15.321117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.321128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 
00:29:23.957 [2024-11-06 11:11:15.321443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.321455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.957 [2024-11-06 11:11:15.321654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.321665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.957 [2024-11-06 11:11:15.321855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.321868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.957 [2024-11-06 11:11:15.322187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.322198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.957 [2024-11-06 11:11:15.322502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.322513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 
00:29:23.957 [2024-11-06 11:11:15.322813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.322825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.957 [2024-11-06 11:11:15.323161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.323172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.957 [2024-11-06 11:11:15.323466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.323476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.957 [2024-11-06 11:11:15.323824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.323836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.957 [2024-11-06 11:11:15.324172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.324184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 
00:29:23.957 [2024-11-06 11:11:15.324373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.324383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.957 [2024-11-06 11:11:15.324697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.324708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.957 [2024-11-06 11:11:15.325007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.325019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.957 [2024-11-06 11:11:15.325364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.325375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.957 [2024-11-06 11:11:15.325709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.325720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 
00:29:23.957 [2024-11-06 11:11:15.326061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.326073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.957 [2024-11-06 11:11:15.326375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.957 [2024-11-06 11:11:15.326386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.957 qpair failed and we were unable to recover it. 00:29:23.958 [2024-11-06 11:11:15.326699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-11-06 11:11:15.326710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-11-06 11:11:15.326916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-11-06 11:11:15.326928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-11-06 11:11:15.327248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-11-06 11:11:15.327259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 
00:29:23.958 [2024-11-06 11:11:15.327465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-11-06 11:11:15.327475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-11-06 11:11:15.327815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-11-06 11:11:15.327827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-11-06 11:11:15.328163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-11-06 11:11:15.328173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-11-06 11:11:15.328460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-11-06 11:11:15.328471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-11-06 11:11:15.328828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-11-06 11:11:15.328840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 
00:29:23.958 [2024-11-06 11:11:15.329179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.958 [2024-11-06 11:11:15.329189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:23.958 qpair failed and we were unable to recover it.
00:29:24.239 [2024-11-06 11:11:15.363678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.239 [2024-11-06 11:11:15.363688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.239 qpair failed and we were unable to recover it. 00:29:24.239 [2024-11-06 11:11:15.363914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.239 [2024-11-06 11:11:15.363925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.239 qpair failed and we were unable to recover it. 00:29:24.239 [2024-11-06 11:11:15.364225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.239 [2024-11-06 11:11:15.364236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.239 qpair failed and we were unable to recover it. 00:29:24.239 [2024-11-06 11:11:15.364577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.239 [2024-11-06 11:11:15.364589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.239 qpair failed and we were unable to recover it. 00:29:24.239 [2024-11-06 11:11:15.364738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.239 [2024-11-06 11:11:15.364751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.239 qpair failed and we were unable to recover it. 
00:29:24.239 [2024-11-06 11:11:15.365053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.239 [2024-11-06 11:11:15.365065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.239 qpair failed and we were unable to recover it. 00:29:24.239 [2024-11-06 11:11:15.365223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.239 [2024-11-06 11:11:15.365234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.239 qpair failed and we were unable to recover it. 00:29:24.239 [2024-11-06 11:11:15.365547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.239 [2024-11-06 11:11:15.365557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.239 qpair failed and we were unable to recover it. 00:29:24.239 [2024-11-06 11:11:15.365858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.239 [2024-11-06 11:11:15.365869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.239 qpair failed and we were unable to recover it. 00:29:24.239 [2024-11-06 11:11:15.366189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.239 [2024-11-06 11:11:15.366200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.239 qpair failed and we were unable to recover it. 
00:29:24.239 [2024-11-06 11:11:15.366361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.239 [2024-11-06 11:11:15.366374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.239 qpair failed and we were unable to recover it. 00:29:24.239 [2024-11-06 11:11:15.366671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.239 [2024-11-06 11:11:15.366681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.239 qpair failed and we were unable to recover it. 00:29:24.239 [2024-11-06 11:11:15.366986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.239 [2024-11-06 11:11:15.366998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.239 qpair failed and we were unable to recover it. 00:29:24.239 [2024-11-06 11:11:15.367304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.239 [2024-11-06 11:11:15.367315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.239 qpair failed and we were unable to recover it. 00:29:24.239 [2024-11-06 11:11:15.367544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.239 [2024-11-06 11:11:15.367554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.239 qpair failed and we were unable to recover it. 
00:29:24.239 [2024-11-06 11:11:15.367768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.239 [2024-11-06 11:11:15.367779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.367958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.367968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.368284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.368296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.368628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.368639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.368823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.368834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 
00:29:24.240 [2024-11-06 11:11:15.369120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.369130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.369393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.369405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.369743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.369758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.370036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.370047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.370354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.370365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 
00:29:24.240 [2024-11-06 11:11:15.370669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.370679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.370841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.370853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.371035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.371045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.371367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.371379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.371689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.371699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 
00:29:24.240 [2024-11-06 11:11:15.371890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.371904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.372106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.372117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.372398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.372408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.372531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.372541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.372696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.372707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 
00:29:24.240 [2024-11-06 11:11:15.372883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.372894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.373209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.373220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.373522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.373533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.373839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.373849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.374181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.374192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 
00:29:24.240 [2024-11-06 11:11:15.374523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.374534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.374845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.240 [2024-11-06 11:11:15.374856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.240 qpair failed and we were unable to recover it. 00:29:24.240 [2024-11-06 11:11:15.375159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.375170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.375357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.375368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.375697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.375708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 
00:29:24.241 [2024-11-06 11:11:15.375893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.375906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.376223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.376233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.376505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.376515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.376824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.376835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.377161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.377171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 
00:29:24.241 [2024-11-06 11:11:15.377474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.377484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.377759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.377770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.378072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.378082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.378412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.378422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.378610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.378622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 
00:29:24.241 [2024-11-06 11:11:15.378956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.378967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.379291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.379302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.379636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.379649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.379981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.379993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.380332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.380342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 
00:29:24.241 [2024-11-06 11:11:15.380645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.380656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.380986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.380997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.381332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.381343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.381531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.381542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.381714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.381726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 
00:29:24.241 [2024-11-06 11:11:15.382014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.382026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.382358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.241 [2024-11-06 11:11:15.382369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.241 qpair failed and we were unable to recover it. 00:29:24.241 [2024-11-06 11:11:15.382705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.382716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.383028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.383038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.383347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.383358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 
00:29:24.242 [2024-11-06 11:11:15.383659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.383670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.383846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.383857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.384197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.384208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.384513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.384524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.384715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.384726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 
00:29:24.242 [2024-11-06 11:11:15.385044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.385055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.385299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.385310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.385615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.385626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.385903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.385914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.386204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.386215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 
00:29:24.242 [2024-11-06 11:11:15.386523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.386535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.386838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.386849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.387151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.387161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.387435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.387447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.387670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.387681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 
00:29:24.242 [2024-11-06 11:11:15.387857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.387869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.388190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.388201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.388495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.388506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.388810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.388821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.389142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.389153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 
00:29:24.242 [2024-11-06 11:11:15.389453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.242 [2024-11-06 11:11:15.389464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.242 qpair failed and we were unable to recover it. 00:29:24.242 [2024-11-06 11:11:15.389796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.389808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.390138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.390149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.390323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.390334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.390644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.390654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 
00:29:24.243 [2024-11-06 11:11:15.390829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.390840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.391151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.391161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.391469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.391480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.391787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.391798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.392120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.392130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 
00:29:24.243 [2024-11-06 11:11:15.392435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.392446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.392760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.392772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.393067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.393078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.393363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.393375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.393709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.393720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 
00:29:24.243 [2024-11-06 11:11:15.394018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.394029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.394330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.394341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.394678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.394688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.394990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.395000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.395183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.395194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 
00:29:24.243 [2024-11-06 11:11:15.395521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.395532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.395862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.395872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.396199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.396209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.396514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.396525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.396834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.396845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 
00:29:24.243 [2024-11-06 11:11:15.397060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.397070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.397370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.397381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.397683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-11-06 11:11:15.397693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-11-06 11:11:15.398005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.398016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.398190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.398201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 
00:29:24.244 [2024-11-06 11:11:15.398507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.398518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.398821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.398832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.399131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.399142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.399432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.399442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.399709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.399720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 
00:29:24.244 [2024-11-06 11:11:15.400056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.400072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.400377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.400387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.400718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.400729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.401038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.401049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.401239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.401250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 
00:29:24.244 [2024-11-06 11:11:15.401555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.401566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.401904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.401914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.402240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.402252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.402544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.402555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.402770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.402781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 
00:29:24.244 [2024-11-06 11:11:15.403107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.403118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.403459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.403470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.403813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.403825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.404022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.404033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.404390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.404400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 
00:29:24.244 [2024-11-06 11:11:15.404707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.404719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.405033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.405044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.405234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.405245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.405586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.405597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-11-06 11:11:15.405919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-11-06 11:11:15.405930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 
00:29:24.245 [2024-11-06 11:11:15.406238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.406249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.406555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.406566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.406790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.406801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.407113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.407123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.407425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.407436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 
00:29:24.245 [2024-11-06 11:11:15.407740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.407755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.408062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.408072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.408244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.408258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.408562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.408573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.408882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.408893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 
00:29:24.245 [2024-11-06 11:11:15.409172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.409183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.409490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.409501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.409811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.409823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.410140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.410150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.410342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.410353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 
00:29:24.245 [2024-11-06 11:11:15.410543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.410554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.410732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.410743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.410999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.411010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.411336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.411348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.411657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.411668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 
00:29:24.245 [2024-11-06 11:11:15.411984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.411995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.412185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.412197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.412526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.412536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.412713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.412724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.413023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.413035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 
00:29:24.245 [2024-11-06 11:11:15.413337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.413347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.245 [2024-11-06 11:11:15.413649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.245 [2024-11-06 11:11:15.413659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.245 qpair failed and we were unable to recover it. 00:29:24.246 [2024-11-06 11:11:15.413923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-11-06 11:11:15.413934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-11-06 11:11:15.414238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-11-06 11:11:15.414248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-11-06 11:11:15.414579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-11-06 11:11:15.414590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 
[... 2024-11-06 11:11:15.414864 through 11:11:15.435103: the same three-line record repeated for every retry — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:29:24.248 [2024-11-06 11:11:15.435571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-11-06 11:11:15.435679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdb4000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-11-06 11:11:15.436068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-11-06 11:11:15.436161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdb4000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-11-06 11:11:15.436452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-11-06 11:11:15.436490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbdb4000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-11-06 11:11:15.436677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-11-06 11:11:15.436692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-11-06 11:11:15.436871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-11-06 11:11:15.436881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
[... 2024-11-06 11:11:15.437228 through 11:11:15.446356: the same three-line record repeated for every retry — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:29:24.249 [2024-11-06 11:11:15.446663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.446674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.446915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.446926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.447211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.447222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.447555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.447567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.447799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.447810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 
00:29:24.249 [2024-11-06 11:11:15.448181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.448193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.448534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.448546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.448870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.448880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.449203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.449214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.449542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.449553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 
00:29:24.249 [2024-11-06 11:11:15.449741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.449756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.449981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.449991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.450313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.450323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.450629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.450639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.450947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.450958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 
00:29:24.249 [2024-11-06 11:11:15.451260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.451272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.451605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.451618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.451938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.451949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.452242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.452252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.452563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.452574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 
00:29:24.249 [2024-11-06 11:11:15.452913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.452924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.453226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-11-06 11:11:15.453236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-11-06 11:11:15.453509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.453520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.453774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.453786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.454071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.454081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-11-06 11:11:15.454385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.454396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.454669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.454680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.455000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.455010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.455313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.455323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.455632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.455643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-11-06 11:11:15.455947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.455959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.456274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.456284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.456592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.456603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.456917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.456928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.457269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.457280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-11-06 11:11:15.457603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.457613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.457946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.457957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.458260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.458271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.458485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.458496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.458844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.458856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-11-06 11:11:15.459137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.459148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.459456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.459467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.459762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.459774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.460095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.460105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.460442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.460453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-11-06 11:11:15.460767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.460778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.461092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.461103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.461291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.461303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.461608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.461619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.461937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.461948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-11-06 11:11:15.462235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.462246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.462585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.462597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.462905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.462917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.463180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.463191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.463525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.463535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-11-06 11:11:15.463868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.463879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.464205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.464216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.464543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.464556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.464853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.464865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.465039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.465050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-11-06 11:11:15.465328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-11-06 11:11:15.465339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-11-06 11:11:15.465679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.251 [2024-11-06 11:11:15.465690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.251 qpair failed and we were unable to recover it. 00:29:24.251 [2024-11-06 11:11:15.465837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.251 [2024-11-06 11:11:15.465848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.251 qpair failed and we were unable to recover it. 00:29:24.251 [2024-11-06 11:11:15.466167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.251 [2024-11-06 11:11:15.466178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.251 qpair failed and we were unable to recover it. 00:29:24.251 [2024-11-06 11:11:15.466480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.251 [2024-11-06 11:11:15.466491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.251 qpair failed and we were unable to recover it. 
00:29:24.251 [2024-11-06 11:11:15.466804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.251 [2024-11-06 11:11:15.466815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.251 qpair failed and we were unable to recover it. 00:29:24.251 [2024-11-06 11:11:15.466990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.251 [2024-11-06 11:11:15.467001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.251 qpair failed and we were unable to recover it. 00:29:24.251 [2024-11-06 11:11:15.467271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.251 [2024-11-06 11:11:15.467282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.251 qpair failed and we were unable to recover it. 00:29:24.251 [2024-11-06 11:11:15.467594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.251 [2024-11-06 11:11:15.467604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.251 qpair failed and we were unable to recover it. 00:29:24.251 [2024-11-06 11:11:15.467907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.251 [2024-11-06 11:11:15.467918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.251 qpair failed and we were unable to recover it. 
00:29:24.251 [2024-11-06 11:11:15.468214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.251 [2024-11-06 11:11:15.468225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.251 qpair failed and we were unable to recover it. 00:29:24.251 [2024-11-06 11:11:15.468554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.251 [2024-11-06 11:11:15.468565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.251 qpair failed and we were unable to recover it. 00:29:24.251 [2024-11-06 11:11:15.468899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.251 [2024-11-06 11:11:15.468911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.251 qpair failed and we were unable to recover it. 00:29:24.251 [2024-11-06 11:11:15.469219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.251 [2024-11-06 11:11:15.469230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.251 qpair failed and we were unable to recover it. 00:29:24.251 [2024-11-06 11:11:15.469511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.251 [2024-11-06 11:11:15.469522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.251 qpair failed and we were unable to recover it. 
00:29:24.251 [2024-11-06 11:11:15.469694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.251 [2024-11-06 11:11:15.469706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.251 qpair failed and we were unable to recover it.
[... the connect()-failed / sock-connection-error / "qpair failed and we were unable to recover it" triplet above repeats ~115 times between 11:11:15.469 and 11:11:15.501, all against addr=10.0.0.2, port=4420; nearly all report tqpair=0x20af0c0, with three occurrences at 11:11:15.499-11:11:15.500 reporting tqpair=0x7fbdb4000b90 instead ...]
00:29:24.254 [2024-11-06 11:11:15.501179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.501190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it.
00:29:24.254 [2024-11-06 11:11:15.501480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.501491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.501668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.501680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.501918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.501929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.502165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.502175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.502363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.502373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 
00:29:24.254 [2024-11-06 11:11:15.502716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.502728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.503036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.503047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.503241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.503253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.503433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.503444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.503753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.503765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 
00:29:24.254 [2024-11-06 11:11:15.503950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.503961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.504135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.504147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.504485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.504496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.504682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.504693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.504740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.504755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 
00:29:24.254 [2024-11-06 11:11:15.504917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.504928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.505224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.505238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.505566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.505577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.505876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.505887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.506193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.506204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 
00:29:24.254 [2024-11-06 11:11:15.506486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.506496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.506807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.506817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-11-06 11:11:15.507157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-11-06 11:11:15.507168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.507499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.507510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.507836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.507847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 
00:29:24.255 [2024-11-06 11:11:15.508031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.508042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.508363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.508374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.508627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.508638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.508807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.508818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.509099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.509110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 
00:29:24.255 [2024-11-06 11:11:15.509306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.509319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.509604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.509614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.509799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.509811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.510046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.510058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.510240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.510252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 
00:29:24.255 [2024-11-06 11:11:15.510344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.510356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.510433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.510443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.510708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.510718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.510942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.510954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.511287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.511298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 
00:29:24.255 [2024-11-06 11:11:15.511599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.511609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.511803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.511815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.511864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.511876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.512231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.512244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.512430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.512442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 
00:29:24.255 [2024-11-06 11:11:15.512800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.512811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.513187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.513197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.513504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.513515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.513812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.513823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.514002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.514013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 
00:29:24.255 [2024-11-06 11:11:15.514291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.514301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.514483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.514495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.514679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.514689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.515005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.515016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.515322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.515333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 
00:29:24.255 [2024-11-06 11:11:15.515705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.515716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.515939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.515950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.516249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.516260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.516595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.516606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.516768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.516780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 
00:29:24.255 [2024-11-06 11:11:15.517093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-11-06 11:11:15.517104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-11-06 11:11:15.517152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-11-06 11:11:15.517162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-11-06 11:11:15.517452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-11-06 11:11:15.517464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-11-06 11:11:15.517727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-11-06 11:11:15.517739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-11-06 11:11:15.517921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-11-06 11:11:15.517932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 
00:29:24.256 [2024-11-06 11:11:15.518207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-11-06 11:11:15.518217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-11-06 11:11:15.518559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-11-06 11:11:15.518570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-11-06 11:11:15.518874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-11-06 11:11:15.518885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-11-06 11:11:15.519047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-11-06 11:11:15.519059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-11-06 11:11:15.519373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-11-06 11:11:15.519384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 
00:29:24.256 [2024-11-06 11:11:15.519575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-11-06 11:11:15.519586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-11-06 11:11:15.519909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-11-06 11:11:15.519920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-11-06 11:11:15.520084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-11-06 11:11:15.520095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-11-06 11:11:15.520331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-11-06 11:11:15.520341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-11-06 11:11:15.520673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-11-06 11:11:15.520684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 
00:29:24.256 [2024-11-06 11:11:15.520871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.256 [2024-11-06 11:11:15.520882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.256 qpair failed and we were unable to recover it.
00:29:24.256 [... the same three-line failure (posix_sock_create connect() errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it") repeats continuously from 11:11:15.521212 through 11:11:15.554642 ...]
00:29:24.259 [2024-11-06 11:11:15.554821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.554832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 00:29:24.259 [2024-11-06 11:11:15.555155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.555167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 00:29:24.259 [2024-11-06 11:11:15.555508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.555520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 00:29:24.259 [2024-11-06 11:11:15.555856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.555867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 00:29:24.259 [2024-11-06 11:11:15.556186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.556197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 
00:29:24.259 [2024-11-06 11:11:15.556494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.556505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 00:29:24.259 [2024-11-06 11:11:15.556850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.556861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 00:29:24.259 [2024-11-06 11:11:15.557194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.557206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 00:29:24.259 [2024-11-06 11:11:15.557516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.557527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 00:29:24.259 [2024-11-06 11:11:15.557831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.557843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 
00:29:24.259 [2024-11-06 11:11:15.558194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.558205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 00:29:24.259 [2024-11-06 11:11:15.558380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.558391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 00:29:24.259 [2024-11-06 11:11:15.558716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.558727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 00:29:24.259 [2024-11-06 11:11:15.559061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.559073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 00:29:24.259 [2024-11-06 11:11:15.559406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.559417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 
00:29:24.259 [2024-11-06 11:11:15.559709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.559722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 00:29:24.259 [2024-11-06 11:11:15.560037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.560051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 00:29:24.259 [2024-11-06 11:11:15.560362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.560374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 00:29:24.259 [2024-11-06 11:11:15.560661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.259 [2024-11-06 11:11:15.560673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.259 qpair failed and we were unable to recover it. 00:29:24.259 [2024-11-06 11:11:15.560891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.560903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 
00:29:24.260 [2024-11-06 11:11:15.561268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.561279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.561583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.561595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.561894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.561905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.562234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.562245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.562515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.562527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 
00:29:24.260 [2024-11-06 11:11:15.562844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.562855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.563197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.563208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.563492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.563504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.563812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.563825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.564037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.564047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 
00:29:24.260 [2024-11-06 11:11:15.564397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.564409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.564716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.564728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.564903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.564914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.565247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.565259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.565546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.565557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 
00:29:24.260 [2024-11-06 11:11:15.565870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.565881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.566057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.566068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.566414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.566425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.566713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.566725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.567034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.567048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 
00:29:24.260 [2024-11-06 11:11:15.567229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.567241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.567554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.567565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.567897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.567909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.568213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.568227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.568558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.568569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 
00:29:24.260 [2024-11-06 11:11:15.568875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.568886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.569182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.569193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.569502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.569513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.569817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.569829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.570156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.570169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 
00:29:24.260 [2024-11-06 11:11:15.570356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.570369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.570705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.570716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.571017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.571029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.571220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.571231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.571531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.571542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 
00:29:24.260 [2024-11-06 11:11:15.571850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.571862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.572200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.572211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.260 qpair failed and we were unable to recover it. 00:29:24.260 [2024-11-06 11:11:15.572521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.260 [2024-11-06 11:11:15.572533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 00:29:24.261 [2024-11-06 11:11:15.572852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.261 [2024-11-06 11:11:15.572863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 00:29:24.261 [2024-11-06 11:11:15.573185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.261 [2024-11-06 11:11:15.573196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 
00:29:24.261 [2024-11-06 11:11:15.573499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.261 [2024-11-06 11:11:15.573510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 00:29:24.261 [2024-11-06 11:11:15.573808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.261 [2024-11-06 11:11:15.573819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 00:29:24.261 [2024-11-06 11:11:15.574004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.261 [2024-11-06 11:11:15.574016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 00:29:24.261 [2024-11-06 11:11:15.574343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.261 [2024-11-06 11:11:15.574354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 00:29:24.261 [2024-11-06 11:11:15.574535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.261 [2024-11-06 11:11:15.574548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 
00:29:24.261 [2024-11-06 11:11:15.574880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.261 [2024-11-06 11:11:15.574892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 00:29:24.261 [2024-11-06 11:11:15.575224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.261 [2024-11-06 11:11:15.575235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 00:29:24.261 [2024-11-06 11:11:15.575454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.261 [2024-11-06 11:11:15.575466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 00:29:24.261 [2024-11-06 11:11:15.575770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.261 [2024-11-06 11:11:15.575783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 00:29:24.261 [2024-11-06 11:11:15.576115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.261 [2024-11-06 11:11:15.576126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 
00:29:24.261 [2024-11-06 11:11:15.576309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.261 [2024-11-06 11:11:15.576324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 00:29:24.261 [2024-11-06 11:11:15.576504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.261 [2024-11-06 11:11:15.576516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 00:29:24.261 [2024-11-06 11:11:15.576809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.261 [2024-11-06 11:11:15.576820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 00:29:24.261 [2024-11-06 11:11:15.577129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.261 [2024-11-06 11:11:15.577140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 00:29:24.261 [2024-11-06 11:11:15.577342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.261 [2024-11-06 11:11:15.577353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.261 qpair failed and we were unable to recover it. 
00:29:24.261 [2024-11-06 11:11:15.577537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.261 [2024-11-06 11:11:15.577548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.261 qpair failed and we were unable to recover it.
00:29:24.261 [2024-11-06 11:11:15.577847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.261 [2024-11-06 11:11:15.577859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.261 qpair failed and we were unable to recover it.
00:29:24.261 [2024-11-06 11:11:15.578025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.261 [2024-11-06 11:11:15.578037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.261 qpair failed and we were unable to recover it.
00:29:24.261 [2024-11-06 11:11:15.578353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.261 [2024-11-06 11:11:15.578364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.261 qpair failed and we were unable to recover it.
00:29:24.261 [2024-11-06 11:11:15.578700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.261 [2024-11-06 11:11:15.578711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.261 qpair failed and we were unable to recover it.
00:29:24.261 [2024-11-06 11:11:15.578917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.261 [2024-11-06 11:11:15.578929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.261 qpair failed and we were unable to recover it.
00:29:24.261 [2024-11-06 11:11:15.579251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.261 [2024-11-06 11:11:15.579263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.261 qpair failed and we were unable to recover it.
00:29:24.261 [2024-11-06 11:11:15.579558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.261 [2024-11-06 11:11:15.579570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.261 qpair failed and we were unable to recover it.
00:29:24.261 [2024-11-06 11:11:15.579877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.261 [2024-11-06 11:11:15.579888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.261 qpair failed and we were unable to recover it.
00:29:24.261 [2024-11-06 11:11:15.580211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.261 [2024-11-06 11:11:15.580222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.261 qpair failed and we were unable to recover it.
00:29:24.261 [2024-11-06 11:11:15.580524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.261 [2024-11-06 11:11:15.580535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.261 qpair failed and we were unable to recover it.
00:29:24.261 [2024-11-06 11:11:15.580722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.261 [2024-11-06 11:11:15.580734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.261 qpair failed and we were unable to recover it.
00:29:24.261 [2024-11-06 11:11:15.581037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.261 [2024-11-06 11:11:15.581049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.261 qpair failed and we were unable to recover it.
00:29:24.261 [2024-11-06 11:11:15.581350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.261 [2024-11-06 11:11:15.581361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.261 qpair failed and we were unable to recover it.
00:29:24.261 [2024-11-06 11:11:15.581517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.261 [2024-11-06 11:11:15.581528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.261 qpair failed and we were unable to recover it.
00:29:24.261 [2024-11-06 11:11:15.581720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.261 [2024-11-06 11:11:15.581730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.261 qpair failed and we were unable to recover it.
00:29:24.261 [2024-11-06 11:11:15.582047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.261 [2024-11-06 11:11:15.582059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.261 qpair failed and we were unable to recover it.
00:29:24.261 [2024-11-06 11:11:15.582369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.582380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.582568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.582579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.582879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.582891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.583069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.583082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.583395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.583406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.583673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.583685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.583990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.584002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.584335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.584347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.584541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.584553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.584715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.584726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.585042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.585053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.585388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.585399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.585588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.585599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.585791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.585804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.586011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.586023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.586332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.586343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.586648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.586660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.586830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.586841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.587031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.587042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.587227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.587237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.587432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.587443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.587787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.587799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.587980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.587992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.588181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.588192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.588243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.588253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.588439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.588450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.588616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.588628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.588946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.588957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.589007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.589017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.589327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.589338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.589529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.589540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.589719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.589730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.589960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.589971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.590277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.590288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.590569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.590580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.590808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.590819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.591009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.591019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.591218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.591229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.591528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.262 [2024-11-06 11:11:15.591539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.262 qpair failed and we were unable to recover it.
00:29:24.262 [2024-11-06 11:11:15.591853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.591864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.592195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.592207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.592391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.592401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.592674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.592685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.592979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.592991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.593330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.593341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.593532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.593544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.593719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.593733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.593901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.593912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.594235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.594246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.594549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.594559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.594752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.594764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.595090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.595101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.595401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.595411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.595736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.595751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.596016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.596027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.596212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.596224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.596419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.596429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.596709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.596719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.597007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.597018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.597328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.597339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.597680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.597692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.597969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.597981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.598303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.598314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.598519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.598529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.598708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.598719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.599008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.599019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.599236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.599247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.599576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.599586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.599929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.599940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.600134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.600145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.600314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.600325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.600647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.600659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.600710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.600721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.600909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.600923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.601223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.601234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.601466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.601477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.601786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.601797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.263 [2024-11-06 11:11:15.602096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.263 [2024-11-06 11:11:15.602107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.263 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.602409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.602420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.602701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.602712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.602928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.602940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.603271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.603281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.603470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.603481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.603790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.603801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.604124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.604135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.604422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.604433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.604646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.604657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.604921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.604933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.605234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.605244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.605518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.605529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.605833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.605845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.606123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.606134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.606434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.606445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.606742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.606756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.607046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.607057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.607371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.607382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.607463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.607472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.607740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.264 [2024-11-06 11:11:15.607753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.264 qpair failed and we were unable to recover it.
00:29:24.264 [2024-11-06 11:11:15.608092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.608103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 00:29:24.264 [2024-11-06 11:11:15.608410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.608421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 00:29:24.264 [2024-11-06 11:11:15.608724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.608735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 00:29:24.264 [2024-11-06 11:11:15.609064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.609075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 00:29:24.264 [2024-11-06 11:11:15.609294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.609305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 
00:29:24.264 [2024-11-06 11:11:15.609611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.609622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 00:29:24.264 [2024-11-06 11:11:15.609948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.609959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 00:29:24.264 [2024-11-06 11:11:15.610274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.610285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 00:29:24.264 [2024-11-06 11:11:15.610581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.610592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 00:29:24.264 [2024-11-06 11:11:15.610906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.610917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 
00:29:24.264 [2024-11-06 11:11:15.611238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.611249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 00:29:24.264 [2024-11-06 11:11:15.611555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.611565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 00:29:24.264 [2024-11-06 11:11:15.611840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.611852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 00:29:24.264 [2024-11-06 11:11:15.612176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.612187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 00:29:24.264 [2024-11-06 11:11:15.612526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.612537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 
00:29:24.264 [2024-11-06 11:11:15.612811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.612822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 00:29:24.264 [2024-11-06 11:11:15.613158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.613169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 00:29:24.264 [2024-11-06 11:11:15.613480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.613491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 00:29:24.264 [2024-11-06 11:11:15.613819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.264 [2024-11-06 11:11:15.613831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.264 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.614141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.614152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 
00:29:24.265 [2024-11-06 11:11:15.614431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.614441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.614626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.614637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.614918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.614928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.615235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.615245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.615605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.615615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 
00:29:24.265 [2024-11-06 11:11:15.615912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.615923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.616245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.616255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.616559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.616569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.616758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.616770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.617089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.617099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 
00:29:24.265 [2024-11-06 11:11:15.617363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.617374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.617679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.617690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.618071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.618082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.618414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.618424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.618757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.618769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 
00:29:24.265 [2024-11-06 11:11:15.619070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.619085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.619428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.619439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.619753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.619764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.620074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.620085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.620413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.620423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 
00:29:24.265 [2024-11-06 11:11:15.620729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.620740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.621070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.621081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.621418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.621429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.621685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.621698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.621771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.621780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 
00:29:24.265 [2024-11-06 11:11:15.622139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.622149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.622457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.622467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.622778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.622789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.623057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.623068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.623258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.623269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 
00:29:24.265 [2024-11-06 11:11:15.623466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.623476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.623637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.623648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.623953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.623964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.624286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.624297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.624561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.624571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 
00:29:24.265 [2024-11-06 11:11:15.624834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.624845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 [2024-11-06 11:11:15.625136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.265 [2024-11-06 11:11:15.625146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.265 qpair failed and we were unable to recover it. 00:29:24.265 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:24.265 [2024-11-06 11:11:15.625442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.625455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:24.266 [2024-11-06 11:11:15.625758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.625769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 
00:29:24.266 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:24.266 [2024-11-06 11:11:15.626067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.626078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:24.266 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.266 [2024-11-06 11:11:15.626383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.626394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.626668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.626679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.626996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.627007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 
00:29:24.266 [2024-11-06 11:11:15.627337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.627349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.627682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.627694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.627993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.628004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.628342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.628353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.628666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.628677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 
00:29:24.266 [2024-11-06 11:11:15.628904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.628917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.629234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.629245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.629553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.629566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.629785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.629796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.630105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.630115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 
00:29:24.266 [2024-11-06 11:11:15.630388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.630400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.630713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.630725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.631043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.631055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.631386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.631398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.631738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.631753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 
00:29:24.266 [2024-11-06 11:11:15.632051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.632062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.632260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.632270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.632549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.632561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.632859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.632870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.633183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.633194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 
00:29:24.266 [2024-11-06 11:11:15.633508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.633520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.633816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.633829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.634044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.634056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.634384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.634395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 00:29:24.266 [2024-11-06 11:11:15.634701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.266 [2024-11-06 11:11:15.634711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.266 qpair failed and we were unable to recover it. 
00:29:24.267 [2024-11-06 11:11:15.635048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.635060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 00:29:24.267 [2024-11-06 11:11:15.635345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.635357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 00:29:24.267 [2024-11-06 11:11:15.635662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.635674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 00:29:24.267 [2024-11-06 11:11:15.635994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.636005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 00:29:24.267 [2024-11-06 11:11:15.636341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.636353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 
00:29:24.267 [2024-11-06 11:11:15.636640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.636652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 00:29:24.267 [2024-11-06 11:11:15.636986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.636998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 00:29:24.267 [2024-11-06 11:11:15.637154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.637166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 00:29:24.267 [2024-11-06 11:11:15.637475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.637486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 00:29:24.267 [2024-11-06 11:11:15.637761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.637772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 
00:29:24.267 [2024-11-06 11:11:15.637955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.637967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 00:29:24.267 [2024-11-06 11:11:15.638225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.638236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 00:29:24.267 [2024-11-06 11:11:15.638429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.638440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 00:29:24.267 [2024-11-06 11:11:15.638635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.638646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 00:29:24.267 [2024-11-06 11:11:15.638697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.638707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 
00:29:24.267 [2024-11-06 11:11:15.638995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.639006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 00:29:24.267 [2024-11-06 11:11:15.639282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.639293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 00:29:24.267 [2024-11-06 11:11:15.639443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.267 [2024-11-06 11:11:15.639454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.267 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.639731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.639744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.640042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.640054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 
00:29:24.535 [2024-11-06 11:11:15.640253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.640265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.640543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.640555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.640861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.640872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.641118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.641130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.641497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.641507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 
00:29:24.535 [2024-11-06 11:11:15.641695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.641706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.641990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.642001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.642299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.642312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.642500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.642512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.642808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.642819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 
00:29:24.535 [2024-11-06 11:11:15.643203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.643215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.643434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.643445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.643636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.643647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.643969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.643980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.644181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.644197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 
00:29:24.535 [2024-11-06 11:11:15.644525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.644535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.644809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.644821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.645125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.645136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.645281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.645292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.645483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.645494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 
00:29:24.535 [2024-11-06 11:11:15.645799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.645811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.646139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.646150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.646481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.646492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.646676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.646687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.646984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.646995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 
00:29:24.535 [2024-11-06 11:11:15.647172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.647185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.647506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.647518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.647683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.647694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.647869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.647880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 00:29:24.535 [2024-11-06 11:11:15.648052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.648064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.535 qpair failed and we were unable to recover it. 
00:29:24.535 [2024-11-06 11:11:15.648361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.535 [2024-11-06 11:11:15.648372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.648713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.648724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.649067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.649078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.649468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.649479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.649793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.649806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 
00:29:24.536 [2024-11-06 11:11:15.650111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.650122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.650429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.650441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.650628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.650640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.650819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.650831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.651111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.651122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 
00:29:24.536 [2024-11-06 11:11:15.651423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.651434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.651740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.651755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.652064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.652075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.652365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.652376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.652643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.652653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 
00:29:24.536 [2024-11-06 11:11:15.652847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.652859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.653186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.653197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.653389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.653399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.653568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.653579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.653968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.653981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 
00:29:24.536 [2024-11-06 11:11:15.654266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.654277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.654441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.654452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.654772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.654783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.655157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.655168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.655355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.655367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 
00:29:24.536 [2024-11-06 11:11:15.655684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.655695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.655875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.655886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.656166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.656177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.656463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.656474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.656661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.656672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 
00:29:24.536 [2024-11-06 11:11:15.656881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.656893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.657097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.657108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.657442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.657454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.657760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-11-06 11:11:15.657771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-11-06 11:11:15.658059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.537 [2024-11-06 11:11:15.658070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.537 qpair failed and we were unable to recover it. 
00:29:24.537 [2024-11-06 11:11:15.658373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.537 [2024-11-06 11:11:15.658384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.537 qpair failed and we were unable to recover it. 00:29:24.537 [2024-11-06 11:11:15.658569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.537 [2024-11-06 11:11:15.658581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.537 qpair failed and we were unable to recover it. 00:29:24.537 [2024-11-06 11:11:15.658912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.537 [2024-11-06 11:11:15.658924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.537 qpair failed and we were unable to recover it. 00:29:24.537 [2024-11-06 11:11:15.659105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.537 [2024-11-06 11:11:15.659117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.537 qpair failed and we were unable to recover it. 00:29:24.537 [2024-11-06 11:11:15.659296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.537 [2024-11-06 11:11:15.659307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.537 qpair failed and we were unable to recover it. 
00:29:24.537 [2024-11-06 11:11:15.659615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.659626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.659971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.659983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.660283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.660295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.660480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.660491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.660803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.660815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.660999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.661010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.661332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.661343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.661650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.661661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.661985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.661996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.662322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.662333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.662517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.662528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.662841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.662852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.663178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.663193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.663495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.663505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.663812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.663823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.664130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.664143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.664318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.664329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.664597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.664609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.664794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.664806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.665102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.665113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.665482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.665493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.665799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.665811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.666140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.666151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.666454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.666465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.666778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.666790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:24.537 [2024-11-06 11:11:15.667116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.667130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:24.537 [2024-11-06 11:11:15.667462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.667475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:24.537 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:24.537 [2024-11-06 11:11:15.667815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.667828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.668000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.668012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.668338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.668348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.537 [2024-11-06 11:11:15.668681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.537 [2024-11-06 11:11:15.668693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.537 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.668996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.669007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.669347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.669358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.669662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.669672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.669992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.670003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.670194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.670205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.670506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.670517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.670831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.670844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.671167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.671178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.671481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.671492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.671672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.671683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.671992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.672003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.672305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.672316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.672622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.672633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.672957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.672968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.673286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.673297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.673606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.673617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.673913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.673925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.674266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.674277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.674580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.674592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.674905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.674917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.675224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.675235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.675512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.675523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.675831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.675842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.676139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.676149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.676334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.676345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.676635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.676646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.677038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.677049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.677352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.677362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.677678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.677689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.677986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.677997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.678274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.678284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.678599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.678611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.678916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.678926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.679209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.538 [2024-11-06 11:11:15.679220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.538 qpair failed and we were unable to recover it.
00:29:24.538 [2024-11-06 11:11:15.679546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.679557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.679863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.679874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.680177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.680187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.680373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.680385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.680720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.680732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.681077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.681087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.681398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.681409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.681751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.681762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.682027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.682037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.682343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.682353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.682669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.682679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.682985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.682997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.683331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.683341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.683526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.683538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.683860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.683872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.684213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.684225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.684525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.684536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.684870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.684882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.685201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.685212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.685390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.685401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.685734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.685744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.686084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.686095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.686429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.686441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.686776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.686787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.687113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.687124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.687433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.687443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.687748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.687760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.688091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-11-06 11:11:15.688102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-11-06 11:11:15.688408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.539 [2024-11-06 11:11:15.688419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.539 qpair failed and we were unable to recover it. 00:29:24.539 [2024-11-06 11:11:15.688721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.539 [2024-11-06 11:11:15.688731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.539 qpair failed and we were unable to recover it. 00:29:24.539 [2024-11-06 11:11:15.689065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.539 [2024-11-06 11:11:15.689076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.539 qpair failed and we were unable to recover it. 00:29:24.539 [2024-11-06 11:11:15.689359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.539 [2024-11-06 11:11:15.689370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.539 qpair failed and we were unable to recover it. 00:29:24.539 [2024-11-06 11:11:15.689702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.539 [2024-11-06 11:11:15.689712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 
00:29:24.540 [2024-11-06 11:11:15.690022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.690032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.690340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.690351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.690638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.690649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.690958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.690970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.691151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.691163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 
00:29:24.540 [2024-11-06 11:11:15.691480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.691490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.691831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.691842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.692145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.692160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.692475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.692486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.692811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.692823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 
00:29:24.540 [2024-11-06 11:11:15.693130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.693140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.693324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.693335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.693652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.693662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.693834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.693845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.694139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.694149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 
00:29:24.540 [2024-11-06 11:11:15.694340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.694353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.694666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.694676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.694851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.694862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.695031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.695042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.695364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.695375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 
00:29:24.540 [2024-11-06 11:11:15.695681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.695692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.695991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.696003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.696342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.696353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.696528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.696539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.696826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.696837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 
00:29:24.540 [2024-11-06 11:11:15.697163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.697174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.697465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.697477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.697785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.697799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.698137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.698150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.698483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.698494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 
00:29:24.540 [2024-11-06 11:11:15.698827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.698838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.699177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.699188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.540 [2024-11-06 11:11:15.699496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.540 [2024-11-06 11:11:15.699507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.540 qpair failed and we were unable to recover it. 00:29:24.541 [2024-11-06 11:11:15.699839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.541 [2024-11-06 11:11:15.699850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.541 qpair failed and we were unable to recover it. 00:29:24.541 [2024-11-06 11:11:15.700125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.541 [2024-11-06 11:11:15.700138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.541 qpair failed and we were unable to recover it. 
00:29:24.541 [2024-11-06 11:11:15.700361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.541 [2024-11-06 11:11:15.700372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.541 qpair failed and we were unable to recover it. 00:29:24.541 [2024-11-06 11:11:15.700698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.541 [2024-11-06 11:11:15.700709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.541 qpair failed and we were unable to recover it. 00:29:24.541 [2024-11-06 11:11:15.701079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.541 [2024-11-06 11:11:15.701092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.541 qpair failed and we were unable to recover it. 00:29:24.541 [2024-11-06 11:11:15.701433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.541 [2024-11-06 11:11:15.701444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.541 qpair failed and we were unable to recover it. 00:29:24.541 [2024-11-06 11:11:15.701755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.541 [2024-11-06 11:11:15.701767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.541 qpair failed and we were unable to recover it. 
00:29:24.541 [2024-11-06 11:11:15.701828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.541 [2024-11-06 11:11:15.701841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 [2024-11-06 11:11:15.702098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.541 [2024-11-06 11:11:15.702109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 [2024-11-06 11:11:15.702453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.541 [2024-11-06 11:11:15.702464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 Malloc0
00:29:24.541 [2024-11-06 11:11:15.702778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.541 [2024-11-06 11:11:15.702789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 [2024-11-06 11:11:15.703069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.541 [2024-11-06 11:11:15.703079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:24.541 [2024-11-06 11:11:15.703382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.541 [2024-11-06 11:11:15.703394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 [2024-11-06 11:11:15.703593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.541 [2024-11-06 11:11:15.703604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:24.541 [2024-11-06 11:11:15.703839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.541 [2024-11-06 11:11:15.703851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 [2024-11-06 11:11:15.704032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.541 [2024-11-06 11:11:15.704043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:24.541 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:24.541 [2024-11-06 11:11:15.704368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.541 [2024-11-06 11:11:15.704379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 [2024-11-06 11:11:15.704781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.541 [2024-11-06 11:11:15.704793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 [2024-11-06 11:11:15.704978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.541 [2024-11-06 11:11:15.704989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 [2024-11-06 11:11:15.705288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.541 [2024-11-06 11:11:15.705300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 [2024-11-06 11:11:15.705517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.541 [2024-11-06 11:11:15.705529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 [2024-11-06 11:11:15.705867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.541 [2024-11-06 11:11:15.705878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.541 qpair failed and we were unable to recover it. 00:29:24.541 [2024-11-06 11:11:15.706223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.541 [2024-11-06 11:11:15.706234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.541 qpair failed and we were unable to recover it. 00:29:24.541 [2024-11-06 11:11:15.706412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.541 [2024-11-06 11:11:15.706424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.541 qpair failed and we were unable to recover it. 00:29:24.541 [2024-11-06 11:11:15.706731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.541 [2024-11-06 11:11:15.706742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.541 qpair failed and we were unable to recover it. 00:29:24.541 [2024-11-06 11:11:15.707063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.541 [2024-11-06 11:11:15.707073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.541 qpair failed and we were unable to recover it. 
00:29:24.541 [2024-11-06 11:11:15.707412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.541 [2024-11-06 11:11:15.707423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.541 qpair failed and we were unable to recover it. 00:29:24.541 [2024-11-06 11:11:15.707728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.541 [2024-11-06 11:11:15.707739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.541 qpair failed and we were unable to recover it. 00:29:24.541 [2024-11-06 11:11:15.708061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.541 [2024-11-06 11:11:15.708072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.541 qpair failed and we were unable to recover it. 00:29:24.541 [2024-11-06 11:11:15.708263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.541 [2024-11-06 11:11:15.708275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.541 qpair failed and we were unable to recover it. 00:29:24.541 [2024-11-06 11:11:15.708322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.542 [2024-11-06 11:11:15.708334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.542 qpair failed and we were unable to recover it. 
00:29:24.542 [2024-11-06 11:11:15.708642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.542 [2024-11-06 11:11:15.708652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.542 qpair failed and we were unable to recover it. 00:29:24.542 [2024-11-06 11:11:15.708813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.542 [2024-11-06 11:11:15.708824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.542 qpair failed and we were unable to recover it. 00:29:24.542 [2024-11-06 11:11:15.709030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.542 [2024-11-06 11:11:15.709041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.542 qpair failed and we were unable to recover it. 00:29:24.542 [2024-11-06 11:11:15.709351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.542 [2024-11-06 11:11:15.709363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.542 qpair failed and we were unable to recover it. 00:29:24.542 [2024-11-06 11:11:15.709707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.542 [2024-11-06 11:11:15.709719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.542 qpair failed and we were unable to recover it. 
00:29:24.542 [2024-11-06 11:11:15.710040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.710051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.710154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:24.542 [2024-11-06 11:11:15.710229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.710240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.710570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.710582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.710847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.710858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.711042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.711053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.711323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.542 [2024-11-06 11:11:15.711333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.542 qpair failed and we were unable to recover it. 00:29:24.542 [2024-11-06 11:11:15.711498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.542 [2024-11-06 11:11:15.711508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.542 qpair failed and we were unable to recover it. 00:29:24.542 [2024-11-06 11:11:15.711825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.542 [2024-11-06 11:11:15.711835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.542 qpair failed and we were unable to recover it. 00:29:24.542 [2024-11-06 11:11:15.712169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.542 [2024-11-06 11:11:15.712180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.542 qpair failed and we were unable to recover it. 00:29:24.542 [2024-11-06 11:11:15.712485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.542 [2024-11-06 11:11:15.712496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.542 qpair failed and we were unable to recover it. 
00:29:24.542 [2024-11-06 11:11:15.712811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.712822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.713147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.713157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.713465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.713477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.713664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.713675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.713962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.713973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.714132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.714142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.714328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.714338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.714672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.714685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.714976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.714987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.715268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.715279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.715596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.715607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.715793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.715805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.715943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.715955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.716197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.716207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.716555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.716566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.716795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.716806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.717098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.717108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.717308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.542 [2024-11-06 11:11:15.717319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.542 qpair failed and we were unable to recover it.
00:29:24.542 [2024-11-06 11:11:15.717517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.717528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.717829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.717840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.718041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.718051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.718372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.718383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.718681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.718691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.718993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.719003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:24.543 [2024-11-06 11:11:15.719318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.719330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:24.543 [2024-11-06 11:11:15.719641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.719652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:24.543 [2024-11-06 11:11:15.719940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.719951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:24.543 [2024-11-06 11:11:15.720119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.720131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.720323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.720333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.720602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.720613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.720664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.720676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.720958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.720969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.721337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.721348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.721540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.721551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.721809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.721820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.722012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.722023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.722349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.722359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.722666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.722677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.723018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.723029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.723225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.723237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.723426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.723436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.723615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.723626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.723833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.723844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.723933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.723943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.724236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.724246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.724554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.724564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.724827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.724841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.725038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.725049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.725384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.725395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.725706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.725717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.725897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.725909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.726097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.726108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.726426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.726437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.726596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.726607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.726944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.543 [2024-11-06 11:11:15.726956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.543 qpair failed and we were unable to recover it.
00:29:24.543 [2024-11-06 11:11:15.727255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.727267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.727530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.727541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.727725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.727735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.728035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.728046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.728353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.728364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.728577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.728588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.728740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.728755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.728916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.728928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.729097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.729109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.729405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.729416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.729741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.729764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.730095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.730106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.730444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.730456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.730618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.730630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.730916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.730927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.730982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.730992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:24.544 [2024-11-06 11:11:15.731166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.731177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.731359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.731371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:24.544 [2024-11-06 11:11:15.731709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.731722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:24.544 [2024-11-06 11:11:15.732052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.732064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:24.544 [2024-11-06 11:11:15.732295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.732306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.732609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.732620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.732961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.732972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.733289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.733300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.733587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.733598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.733938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.733950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.734254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.734266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.734597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.734608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.734793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.734805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.735130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.735141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.735482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.735493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.735799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.735810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.544 qpair failed and we were unable to recover it.
00:29:24.544 [2024-11-06 11:11:15.736175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.544 [2024-11-06 11:11:15.736186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.736492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.736503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.736779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.736790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.737096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.737106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.737387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.737398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.737660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.545 [2024-11-06 11:11:15.737672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.545 qpair failed and we were unable to recover it. 00:29:24.545 [2024-11-06 11:11:15.737973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.545 [2024-11-06 11:11:15.737984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.545 qpair failed and we were unable to recover it. 00:29:24.545 [2024-11-06 11:11:15.738300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.545 [2024-11-06 11:11:15.738312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.545 qpair failed and we were unable to recover it. 00:29:24.545 [2024-11-06 11:11:15.738613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.545 [2024-11-06 11:11:15.738624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.545 qpair failed and we were unable to recover it. 00:29:24.545 [2024-11-06 11:11:15.738928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.545 [2024-11-06 11:11:15.738939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.545 qpair failed and we were unable to recover it. 
00:29:24.545 [2024-11-06 11:11:15.739232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.545 [2024-11-06 11:11:15.739243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.545 qpair failed and we were unable to recover it. 00:29:24.545 [2024-11-06 11:11:15.739544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.545 [2024-11-06 11:11:15.739555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.545 qpair failed and we were unable to recover it. 00:29:24.545 [2024-11-06 11:11:15.739900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.545 [2024-11-06 11:11:15.739911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.545 qpair failed and we were unable to recover it. 00:29:24.545 [2024-11-06 11:11:15.740257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.545 [2024-11-06 11:11:15.740267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.545 qpair failed and we were unable to recover it. 00:29:24.545 [2024-11-06 11:11:15.740545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.545 [2024-11-06 11:11:15.740556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420 00:29:24.545 qpair failed and we were unable to recover it. 
00:29:24.545 [2024-11-06 11:11:15.740867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.740879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.741207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.741218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.741520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.741531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.741808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.741819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.742127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.742138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.742443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.742453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.742784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.742795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.743036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.743046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:24.545 [2024-11-06 11:11:15.743351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.743363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:24.545 [2024-11-06 11:11:15.743673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.743685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:24.545 [2024-11-06 11:11:15.744059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.744070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:24.545 [2024-11-06 11:11:15.744406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.744417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.744749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.744760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.745045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.745056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.745225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.745236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.745580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.745590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.745769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.745780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.745977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.745987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.746316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.746327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.746613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.746624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.746969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.746981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.545 [2024-11-06 11:11:15.747315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.545 [2024-11-06 11:11:15.747326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.545 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.747661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.546 [2024-11-06 11:11:15.747673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.747983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.546 [2024-11-06 11:11:15.747994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.748275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.546 [2024-11-06 11:11:15.748285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.748550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.546 [2024-11-06 11:11:15.748561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.748864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.546 [2024-11-06 11:11:15.748874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.749211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.546 [2024-11-06 11:11:15.749221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.749525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.546 [2024-11-06 11:11:15.749538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.749865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.546 [2024-11-06 11:11:15.749875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.750182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.546 [2024-11-06 11:11:15.750193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20af0c0 with addr=10.0.0.2, port=4420
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.750381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:24.546 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:24.546 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:24.546 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:24.546 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:24.546 [2024-11-06 11:11:15.761217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.546 [2024-11-06 11:11:15.761333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.546 [2024-11-06 11:11:15.761352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.546 [2024-11-06 11:11:15.761360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.546 [2024-11-06 11:11:15.761371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.546 [2024-11-06 11:11:15.761391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:24.546 11:11:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3444418
00:29:24.546 [2024-11-06 11:11:15.770984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.546 [2024-11-06 11:11:15.771063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.546 [2024-11-06 11:11:15.771078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.546 [2024-11-06 11:11:15.771085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.546 [2024-11-06 11:11:15.771092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.546 [2024-11-06 11:11:15.771107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.780991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.546 [2024-11-06 11:11:15.781082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.546 [2024-11-06 11:11:15.781097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.546 [2024-11-06 11:11:15.781104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.546 [2024-11-06 11:11:15.781111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.546 [2024-11-06 11:11:15.781125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.791017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.546 [2024-11-06 11:11:15.791095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.546 [2024-11-06 11:11:15.791110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.546 [2024-11-06 11:11:15.791117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.546 [2024-11-06 11:11:15.791124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.546 [2024-11-06 11:11:15.791138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.801023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.546 [2024-11-06 11:11:15.801092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.546 [2024-11-06 11:11:15.801105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.546 [2024-11-06 11:11:15.801112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.546 [2024-11-06 11:11:15.801119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.546 [2024-11-06 11:11:15.801139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.810942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.546 [2024-11-06 11:11:15.811001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.546 [2024-11-06 11:11:15.811014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.546 [2024-11-06 11:11:15.811021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.546 [2024-11-06 11:11:15.811028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.546 [2024-11-06 11:11:15.811042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.821031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.546 [2024-11-06 11:11:15.821083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.546 [2024-11-06 11:11:15.821096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.546 [2024-11-06 11:11:15.821104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.546 [2024-11-06 11:11:15.821111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.546 [2024-11-06 11:11:15.821125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.830940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.546 [2024-11-06 11:11:15.830998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.546 [2024-11-06 11:11:15.831012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.546 [2024-11-06 11:11:15.831019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.546 [2024-11-06 11:11:15.831026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.546 [2024-11-06 11:11:15.831039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.841119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.546 [2024-11-06 11:11:15.841179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.546 [2024-11-06 11:11:15.841192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.546 [2024-11-06 11:11:15.841199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.546 [2024-11-06 11:11:15.841206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.546 [2024-11-06 11:11:15.841219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.546 qpair failed and we were unable to recover it.
00:29:24.546 [2024-11-06 11:11:15.851108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.547 [2024-11-06 11:11:15.851167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.547 [2024-11-06 11:11:15.851181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.547 [2024-11-06 11:11:15.851188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.547 [2024-11-06 11:11:15.851194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.547 [2024-11-06 11:11:15.851208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.547 qpair failed and we were unable to recover it.
00:29:24.547 [2024-11-06 11:11:15.861136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.547 [2024-11-06 11:11:15.861188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.547 [2024-11-06 11:11:15.861202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.547 [2024-11-06 11:11:15.861210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.547 [2024-11-06 11:11:15.861216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.547 [2024-11-06 11:11:15.861230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.547 qpair failed and we were unable to recover it.
00:29:24.547 [2024-11-06 11:11:15.871139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.547 [2024-11-06 11:11:15.871197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.547 [2024-11-06 11:11:15.871210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.547 [2024-11-06 11:11:15.871218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.547 [2024-11-06 11:11:15.871224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.547 [2024-11-06 11:11:15.871237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.547 qpair failed and we were unable to recover it.
00:29:24.547 [2024-11-06 11:11:15.881142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.547 [2024-11-06 11:11:15.881227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.547 [2024-11-06 11:11:15.881240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.547 [2024-11-06 11:11:15.881248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.547 [2024-11-06 11:11:15.881254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.547 [2024-11-06 11:11:15.881268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.547 qpair failed and we were unable to recover it.
00:29:24.547 [2024-11-06 11:11:15.891190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.547 [2024-11-06 11:11:15.891277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.547 [2024-11-06 11:11:15.891294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.547 [2024-11-06 11:11:15.891301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.547 [2024-11-06 11:11:15.891308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.547 [2024-11-06 11:11:15.891322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.547 qpair failed and we were unable to recover it.
00:29:24.547 [2024-11-06 11:11:15.901229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.547 [2024-11-06 11:11:15.901279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.547 [2024-11-06 11:11:15.901292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.547 [2024-11-06 11:11:15.901300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.547 [2024-11-06 11:11:15.901306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.547 [2024-11-06 11:11:15.901320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.547 qpair failed and we were unable to recover it.
00:29:24.547 [2024-11-06 11:11:15.911250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.547 [2024-11-06 11:11:15.911310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.547 [2024-11-06 11:11:15.911323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.547 [2024-11-06 11:11:15.911331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.547 [2024-11-06 11:11:15.911338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.547 [2024-11-06 11:11:15.911352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.547 qpair failed and we were unable to recover it.
00:29:24.547 [2024-11-06 11:11:15.921360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.547 [2024-11-06 11:11:15.921419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.547 [2024-11-06 11:11:15.921433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.547 [2024-11-06 11:11:15.921440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.547 [2024-11-06 11:11:15.921447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.547 [2024-11-06 11:11:15.921460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.547 qpair failed and we were unable to recover it.
00:29:24.547 [2024-11-06 11:11:15.931278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.547 [2024-11-06 11:11:15.931329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.547 [2024-11-06 11:11:15.931342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.547 [2024-11-06 11:11:15.931350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.547 [2024-11-06 11:11:15.931356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.547 [2024-11-06 11:11:15.931373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.547 qpair failed and we were unable to recover it.
00:29:24.547 [2024-11-06 11:11:15.941339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.547 [2024-11-06 11:11:15.941395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.547 [2024-11-06 11:11:15.941409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.547 [2024-11-06 11:11:15.941416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.547 [2024-11-06 11:11:15.941423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.547 [2024-11-06 11:11:15.941436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.547 qpair failed and we were unable to recover it.
00:29:24.809 [2024-11-06 11:11:15.951360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.809 [2024-11-06 11:11:15.951420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.809 [2024-11-06 11:11:15.951434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.809 [2024-11-06 11:11:15.951441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.809 [2024-11-06 11:11:15.951448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:24.809 [2024-11-06 11:11:15.951462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:24.809 qpair failed and we were unable to recover it.
00:29:24.809 [2024-11-06 11:11:15.961449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.809 [2024-11-06 11:11:15.961543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.809 [2024-11-06 11:11:15.961557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.809 [2024-11-06 11:11:15.961565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.809 [2024-11-06 11:11:15.961572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.809 [2024-11-06 11:11:15.961586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.809 qpair failed and we were unable to recover it. 
00:29:24.809 [2024-11-06 11:11:15.971402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.809 [2024-11-06 11:11:15.971511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.809 [2024-11-06 11:11:15.971538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.809 [2024-11-06 11:11:15.971547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.809 [2024-11-06 11:11:15.971555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.809 [2024-11-06 11:11:15.971575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.809 qpair failed and we were unable to recover it. 
00:29:24.809 [2024-11-06 11:11:15.981539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.809 [2024-11-06 11:11:15.981609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.810 [2024-11-06 11:11:15.981634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.810 [2024-11-06 11:11:15.981643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.810 [2024-11-06 11:11:15.981650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.810 [2024-11-06 11:11:15.981670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.810 qpair failed and we were unable to recover it. 
00:29:24.810 [2024-11-06 11:11:15.991494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.810 [2024-11-06 11:11:15.991548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.810 [2024-11-06 11:11:15.991564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.810 [2024-11-06 11:11:15.991572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.810 [2024-11-06 11:11:15.991579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.810 [2024-11-06 11:11:15.991594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.810 qpair failed and we were unable to recover it. 
00:29:24.810 [2024-11-06 11:11:16.001629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.810 [2024-11-06 11:11:16.001738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.810 [2024-11-06 11:11:16.001756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.810 [2024-11-06 11:11:16.001763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.810 [2024-11-06 11:11:16.001771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.810 [2024-11-06 11:11:16.001786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.810 qpair failed and we were unable to recover it. 
00:29:24.810 [2024-11-06 11:11:16.011560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.810 [2024-11-06 11:11:16.011611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.810 [2024-11-06 11:11:16.011625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.810 [2024-11-06 11:11:16.011633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.810 [2024-11-06 11:11:16.011639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.810 [2024-11-06 11:11:16.011654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.810 qpair failed and we were unable to recover it. 
00:29:24.810 [2024-11-06 11:11:16.021533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.810 [2024-11-06 11:11:16.021613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.810 [2024-11-06 11:11:16.021631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.810 [2024-11-06 11:11:16.021638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.810 [2024-11-06 11:11:16.021645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.810 [2024-11-06 11:11:16.021659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.810 qpair failed and we were unable to recover it. 
00:29:24.810 [2024-11-06 11:11:16.031475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.810 [2024-11-06 11:11:16.031564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.810 [2024-11-06 11:11:16.031578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.810 [2024-11-06 11:11:16.031585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.810 [2024-11-06 11:11:16.031591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.810 [2024-11-06 11:11:16.031606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.810 qpair failed and we were unable to recover it. 
00:29:24.810 [2024-11-06 11:11:16.041657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.810 [2024-11-06 11:11:16.041720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.810 [2024-11-06 11:11:16.041734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.810 [2024-11-06 11:11:16.041741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.810 [2024-11-06 11:11:16.041752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.810 [2024-11-06 11:11:16.041767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.810 qpair failed and we were unable to recover it. 
00:29:24.810 [2024-11-06 11:11:16.051569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.810 [2024-11-06 11:11:16.051624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.810 [2024-11-06 11:11:16.051638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.810 [2024-11-06 11:11:16.051646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.810 [2024-11-06 11:11:16.051652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.810 [2024-11-06 11:11:16.051666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.810 qpair failed and we were unable to recover it. 
00:29:24.810 [2024-11-06 11:11:16.061632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.810 [2024-11-06 11:11:16.061688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.810 [2024-11-06 11:11:16.061702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.810 [2024-11-06 11:11:16.061709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.810 [2024-11-06 11:11:16.061716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.810 [2024-11-06 11:11:16.061734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.810 qpair failed and we were unable to recover it. 
00:29:24.810 [2024-11-06 11:11:16.071555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.810 [2024-11-06 11:11:16.071610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.810 [2024-11-06 11:11:16.071623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.810 [2024-11-06 11:11:16.071631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.810 [2024-11-06 11:11:16.071638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.810 [2024-11-06 11:11:16.071652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.810 qpair failed and we were unable to recover it. 
00:29:24.810 [2024-11-06 11:11:16.081768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.810 [2024-11-06 11:11:16.081841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.810 [2024-11-06 11:11:16.081855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.810 [2024-11-06 11:11:16.081863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.810 [2024-11-06 11:11:16.081869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.810 [2024-11-06 11:11:16.081883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.810 qpair failed and we were unable to recover it. 
00:29:24.810 [2024-11-06 11:11:16.091725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.810 [2024-11-06 11:11:16.091783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.810 [2024-11-06 11:11:16.091799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.810 [2024-11-06 11:11:16.091808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.810 [2024-11-06 11:11:16.091815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.810 [2024-11-06 11:11:16.091830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.810 qpair failed and we were unable to recover it. 
00:29:24.810 [2024-11-06 11:11:16.101727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.810 [2024-11-06 11:11:16.101790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.810 [2024-11-06 11:11:16.101804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.810 [2024-11-06 11:11:16.101811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.810 [2024-11-06 11:11:16.101818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.810 [2024-11-06 11:11:16.101832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.810 qpair failed and we were unable to recover it. 
00:29:24.810 [2024-11-06 11:11:16.111768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.810 [2024-11-06 11:11:16.111825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.810 [2024-11-06 11:11:16.111839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.810 [2024-11-06 11:11:16.111847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.811 [2024-11-06 11:11:16.111853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.811 [2024-11-06 11:11:16.111868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.811 qpair failed and we were unable to recover it. 
00:29:24.811 [2024-11-06 11:11:16.121855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.811 [2024-11-06 11:11:16.121952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.811 [2024-11-06 11:11:16.121966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.811 [2024-11-06 11:11:16.121973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.811 [2024-11-06 11:11:16.121980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.811 [2024-11-06 11:11:16.121995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.811 qpair failed and we were unable to recover it. 
00:29:24.811 [2024-11-06 11:11:16.131827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.811 [2024-11-06 11:11:16.131880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.811 [2024-11-06 11:11:16.131894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.811 [2024-11-06 11:11:16.131901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.811 [2024-11-06 11:11:16.131908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.811 [2024-11-06 11:11:16.131922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.811 qpair failed and we were unable to recover it. 
00:29:24.811 [2024-11-06 11:11:16.141855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.811 [2024-11-06 11:11:16.141913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.811 [2024-11-06 11:11:16.141927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.811 [2024-11-06 11:11:16.141935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.811 [2024-11-06 11:11:16.141941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.811 [2024-11-06 11:11:16.141956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.811 qpair failed and we were unable to recover it. 
00:29:24.811 [2024-11-06 11:11:16.151869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.811 [2024-11-06 11:11:16.151933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.811 [2024-11-06 11:11:16.151950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.811 [2024-11-06 11:11:16.151958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.811 [2024-11-06 11:11:16.151965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.811 [2024-11-06 11:11:16.151979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.811 qpair failed and we were unable to recover it. 
00:29:24.811 [2024-11-06 11:11:16.162005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.811 [2024-11-06 11:11:16.162099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.811 [2024-11-06 11:11:16.162114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.811 [2024-11-06 11:11:16.162121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.811 [2024-11-06 11:11:16.162128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.811 [2024-11-06 11:11:16.162142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.811 qpair failed and we were unable to recover it. 
00:29:24.811 [2024-11-06 11:11:16.171956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.811 [2024-11-06 11:11:16.172011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.811 [2024-11-06 11:11:16.172025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.811 [2024-11-06 11:11:16.172032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.811 [2024-11-06 11:11:16.172039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.811 [2024-11-06 11:11:16.172053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.811 qpair failed and we were unable to recover it. 
00:29:24.811 [2024-11-06 11:11:16.181976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.811 [2024-11-06 11:11:16.182034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.811 [2024-11-06 11:11:16.182048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.811 [2024-11-06 11:11:16.182056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.811 [2024-11-06 11:11:16.182062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.811 [2024-11-06 11:11:16.182076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.811 qpair failed and we were unable to recover it. 
00:29:24.811 [2024-11-06 11:11:16.192017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.811 [2024-11-06 11:11:16.192072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.811 [2024-11-06 11:11:16.192085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.811 [2024-11-06 11:11:16.192093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.811 [2024-11-06 11:11:16.192103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.811 [2024-11-06 11:11:16.192117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.811 qpair failed and we were unable to recover it. 
00:29:24.811 [2024-11-06 11:11:16.202106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.811 [2024-11-06 11:11:16.202171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.811 [2024-11-06 11:11:16.202184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.811 [2024-11-06 11:11:16.202192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.811 [2024-11-06 11:11:16.202198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.811 [2024-11-06 11:11:16.202212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.811 qpair failed and we were unable to recover it. 
00:29:24.811 [2024-11-06 11:11:16.212031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.811 [2024-11-06 11:11:16.212081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.811 [2024-11-06 11:11:16.212094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.811 [2024-11-06 11:11:16.212101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.811 [2024-11-06 11:11:16.212108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.811 [2024-11-06 11:11:16.212121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.811 qpair failed and we were unable to recover it. 
00:29:24.811 [2024-11-06 11:11:16.222095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.811 [2024-11-06 11:11:16.222156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.811 [2024-11-06 11:11:16.222169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.811 [2024-11-06 11:11:16.222177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.811 [2024-11-06 11:11:16.222183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:24.811 [2024-11-06 11:11:16.222197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.811 qpair failed and we were unable to recover it. 
00:29:25.074 [2024-11-06 11:11:16.232102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.074 [2024-11-06 11:11:16.232157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.074 [2024-11-06 11:11:16.232171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.074 [2024-11-06 11:11:16.232179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.074 [2024-11-06 11:11:16.232186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.074 [2024-11-06 11:11:16.232199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.074 qpair failed and we were unable to recover it. 
00:29:25.074 [2024-11-06 11:11:16.242190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.074 [2024-11-06 11:11:16.242252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.074 [2024-11-06 11:11:16.242266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.074 [2024-11-06 11:11:16.242273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.074 [2024-11-06 11:11:16.242280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.074 [2024-11-06 11:11:16.242294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.074 qpair failed and we were unable to recover it. 
00:29:25.074 [2024-11-06 11:11:16.252172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.074 [2024-11-06 11:11:16.252228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.074 [2024-11-06 11:11:16.252241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.074 [2024-11-06 11:11:16.252249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.074 [2024-11-06 11:11:16.252255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.074 [2024-11-06 11:11:16.252269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.074 qpair failed and we were unable to recover it.
00:29:25.074 [2024-11-06 11:11:16.262232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.075 [2024-11-06 11:11:16.262286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.075 [2024-11-06 11:11:16.262299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.075 [2024-11-06 11:11:16.262307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.075 [2024-11-06 11:11:16.262313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.075 [2024-11-06 11:11:16.262327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.075 qpair failed and we were unable to recover it.
00:29:25.075 [2024-11-06 11:11:16.272252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.075 [2024-11-06 11:11:16.272311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.075 [2024-11-06 11:11:16.272325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.075 [2024-11-06 11:11:16.272332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.075 [2024-11-06 11:11:16.272339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.075 [2024-11-06 11:11:16.272354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.075 qpair failed and we were unable to recover it.
00:29:25.075 [2024-11-06 11:11:16.282265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.075 [2024-11-06 11:11:16.282319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.075 [2024-11-06 11:11:16.282335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.075 [2024-11-06 11:11:16.282343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.075 [2024-11-06 11:11:16.282350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.075 [2024-11-06 11:11:16.282363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.075 qpair failed and we were unable to recover it.
00:29:25.075 [2024-11-06 11:11:16.292312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.075 [2024-11-06 11:11:16.292373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.075 [2024-11-06 11:11:16.292387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.075 [2024-11-06 11:11:16.292394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.075 [2024-11-06 11:11:16.292401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.075 [2024-11-06 11:11:16.292414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.075 qpair failed and we were unable to recover it.
00:29:25.075 [2024-11-06 11:11:16.302354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.075 [2024-11-06 11:11:16.302414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.075 [2024-11-06 11:11:16.302427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.075 [2024-11-06 11:11:16.302434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.075 [2024-11-06 11:11:16.302441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.075 [2024-11-06 11:11:16.302455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.075 qpair failed and we were unable to recover it.
00:29:25.075 [2024-11-06 11:11:16.312365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.075 [2024-11-06 11:11:16.312462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.075 [2024-11-06 11:11:16.312476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.075 [2024-11-06 11:11:16.312484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.075 [2024-11-06 11:11:16.312491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.075 [2024-11-06 11:11:16.312505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.075 qpair failed and we were unable to recover it.
00:29:25.075 [2024-11-06 11:11:16.322383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.075 [2024-11-06 11:11:16.322438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.075 [2024-11-06 11:11:16.322451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.075 [2024-11-06 11:11:16.322459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.075 [2024-11-06 11:11:16.322469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.075 [2024-11-06 11:11:16.322483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.075 qpair failed and we were unable to recover it.
00:29:25.075 [2024-11-06 11:11:16.332406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.075 [2024-11-06 11:11:16.332456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.075 [2024-11-06 11:11:16.332470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.075 [2024-11-06 11:11:16.332477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.075 [2024-11-06 11:11:16.332484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.075 [2024-11-06 11:11:16.332497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.075 qpair failed and we were unable to recover it.
00:29:25.075 [2024-11-06 11:11:16.342427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.075 [2024-11-06 11:11:16.342485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.075 [2024-11-06 11:11:16.342499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.075 [2024-11-06 11:11:16.342506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.075 [2024-11-06 11:11:16.342513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.075 [2024-11-06 11:11:16.342527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.075 qpair failed and we were unable to recover it.
00:29:25.075 [2024-11-06 11:11:16.352474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.075 [2024-11-06 11:11:16.352576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.075 [2024-11-06 11:11:16.352590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.075 [2024-11-06 11:11:16.352597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.075 [2024-11-06 11:11:16.352604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.075 [2024-11-06 11:11:16.352618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.075 qpair failed and we were unable to recover it.
00:29:25.075 [2024-11-06 11:11:16.362504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.075 [2024-11-06 11:11:16.362557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.075 [2024-11-06 11:11:16.362571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.075 [2024-11-06 11:11:16.362578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.075 [2024-11-06 11:11:16.362585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.075 [2024-11-06 11:11:16.362599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.075 qpair failed and we were unable to recover it.
00:29:25.075 [2024-11-06 11:11:16.372531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.075 [2024-11-06 11:11:16.372588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.075 [2024-11-06 11:11:16.372601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.075 [2024-11-06 11:11:16.372609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.075 [2024-11-06 11:11:16.372616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.075 [2024-11-06 11:11:16.372629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.075 qpair failed and we were unable to recover it.
00:29:25.075 [2024-11-06 11:11:16.382521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.075 [2024-11-06 11:11:16.382574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.075 [2024-11-06 11:11:16.382588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.075 [2024-11-06 11:11:16.382595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.075 [2024-11-06 11:11:16.382602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.075 [2024-11-06 11:11:16.382616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.075 qpair failed and we were unable to recover it.
00:29:25.075 [2024-11-06 11:11:16.392590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.076 [2024-11-06 11:11:16.392650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.076 [2024-11-06 11:11:16.392664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.076 [2024-11-06 11:11:16.392672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.076 [2024-11-06 11:11:16.392679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.076 [2024-11-06 11:11:16.392693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.076 qpair failed and we were unable to recover it.
00:29:25.076 [2024-11-06 11:11:16.402612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.076 [2024-11-06 11:11:16.402664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.076 [2024-11-06 11:11:16.402678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.076 [2024-11-06 11:11:16.402685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.076 [2024-11-06 11:11:16.402692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.076 [2024-11-06 11:11:16.402705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.076 qpair failed and we were unable to recover it.
00:29:25.076 [2024-11-06 11:11:16.412648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.076 [2024-11-06 11:11:16.412701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.076 [2024-11-06 11:11:16.412719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.076 [2024-11-06 11:11:16.412726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.076 [2024-11-06 11:11:16.412732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.076 [2024-11-06 11:11:16.412750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.076 qpair failed and we were unable to recover it.
00:29:25.076 [2024-11-06 11:11:16.422685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.076 [2024-11-06 11:11:16.422738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.076 [2024-11-06 11:11:16.422755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.076 [2024-11-06 11:11:16.422763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.076 [2024-11-06 11:11:16.422769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.076 [2024-11-06 11:11:16.422783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.076 qpair failed and we were unable to recover it.
00:29:25.076 [2024-11-06 11:11:16.432583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.076 [2024-11-06 11:11:16.432645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.076 [2024-11-06 11:11:16.432659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.076 [2024-11-06 11:11:16.432666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.076 [2024-11-06 11:11:16.432673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.076 [2024-11-06 11:11:16.432686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.076 qpair failed and we were unable to recover it.
00:29:25.076 [2024-11-06 11:11:16.442761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.076 [2024-11-06 11:11:16.442864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.076 [2024-11-06 11:11:16.442878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.076 [2024-11-06 11:11:16.442885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.076 [2024-11-06 11:11:16.442892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.076 [2024-11-06 11:11:16.442906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.076 qpair failed and we were unable to recover it.
00:29:25.076 [2024-11-06 11:11:16.452755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.076 [2024-11-06 11:11:16.452819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.076 [2024-11-06 11:11:16.452833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.076 [2024-11-06 11:11:16.452841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.076 [2024-11-06 11:11:16.452855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.076 [2024-11-06 11:11:16.452869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.076 qpair failed and we were unable to recover it.
00:29:25.076 [2024-11-06 11:11:16.462776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.076 [2024-11-06 11:11:16.462832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.076 [2024-11-06 11:11:16.462845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.076 [2024-11-06 11:11:16.462852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.076 [2024-11-06 11:11:16.462859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.076 [2024-11-06 11:11:16.462873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.076 qpair failed and we were unable to recover it.
00:29:25.076 [2024-11-06 11:11:16.472779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.076 [2024-11-06 11:11:16.472839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.076 [2024-11-06 11:11:16.472852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.076 [2024-11-06 11:11:16.472859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.076 [2024-11-06 11:11:16.472866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.076 [2024-11-06 11:11:16.472880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.076 qpair failed and we were unable to recover it.
00:29:25.076 [2024-11-06 11:11:16.482817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.076 [2024-11-06 11:11:16.482880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.076 [2024-11-06 11:11:16.482894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.076 [2024-11-06 11:11:16.482901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.076 [2024-11-06 11:11:16.482908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.076 [2024-11-06 11:11:16.482922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.076 qpair failed and we were unable to recover it.
00:29:25.076 [2024-11-06 11:11:16.492866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.076 [2024-11-06 11:11:16.492921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.076 [2024-11-06 11:11:16.492935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.076 [2024-11-06 11:11:16.492942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.076 [2024-11-06 11:11:16.492949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.076 [2024-11-06 11:11:16.492963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.076 qpair failed and we were unable to recover it.
00:29:25.339 [2024-11-06 11:11:16.502778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.339 [2024-11-06 11:11:16.502871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.339 [2024-11-06 11:11:16.502886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.339 [2024-11-06 11:11:16.502894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.339 [2024-11-06 11:11:16.502901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.339 [2024-11-06 11:11:16.502915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.339 qpair failed and we were unable to recover it.
00:29:25.339 [2024-11-06 11:11:16.512894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.339 [2024-11-06 11:11:16.512958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.339 [2024-11-06 11:11:16.512973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.339 [2024-11-06 11:11:16.512981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.339 [2024-11-06 11:11:16.512988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.339 [2024-11-06 11:11:16.513001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.339 qpair failed and we were unable to recover it.
00:29:25.339 [2024-11-06 11:11:16.522934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.339 [2024-11-06 11:11:16.523017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.339 [2024-11-06 11:11:16.523030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.339 [2024-11-06 11:11:16.523037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.339 [2024-11-06 11:11:16.523045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.339 [2024-11-06 11:11:16.523059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.339 qpair failed and we were unable to recover it.
00:29:25.339 [2024-11-06 11:11:16.532861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.339 [2024-11-06 11:11:16.532920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.339 [2024-11-06 11:11:16.532934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.339 [2024-11-06 11:11:16.532941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.339 [2024-11-06 11:11:16.532948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.339 [2024-11-06 11:11:16.532962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.339 qpair failed and we were unable to recover it.
00:29:25.339 [2024-11-06 11:11:16.542965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.339 [2024-11-06 11:11:16.543018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.339 [2024-11-06 11:11:16.543035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.339 [2024-11-06 11:11:16.543042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.339 [2024-11-06 11:11:16.543049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.339 [2024-11-06 11:11:16.543063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.339 qpair failed and we were unable to recover it.
00:29:25.339 [2024-11-06 11:11:16.553048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.339 [2024-11-06 11:11:16.553112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.339 [2024-11-06 11:11:16.553126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.339 [2024-11-06 11:11:16.553133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.339 [2024-11-06 11:11:16.553140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.339 [2024-11-06 11:11:16.553154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.339 qpair failed and we were unable to recover it.
00:29:25.339 [2024-11-06 11:11:16.563039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.339 [2024-11-06 11:11:16.563093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.339 [2024-11-06 11:11:16.563106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.339 [2024-11-06 11:11:16.563114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.339 [2024-11-06 11:11:16.563120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.339 [2024-11-06 11:11:16.563134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.339 qpair failed and we were unable to recover it.
00:29:25.339 [2024-11-06 11:11:16.573062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.339 [2024-11-06 11:11:16.573122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.339 [2024-11-06 11:11:16.573136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.339 [2024-11-06 11:11:16.573144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.339 [2024-11-06 11:11:16.573150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.339 [2024-11-06 11:11:16.573164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.339 qpair failed and we were unable to recover it.
00:29:25.339 [2024-11-06 11:11:16.583126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.339 [2024-11-06 11:11:16.583227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.339 [2024-11-06 11:11:16.583241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.339 [2024-11-06 11:11:16.583249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.339 [2024-11-06 11:11:16.583258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.339 [2024-11-06 11:11:16.583273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.339 qpair failed and we were unable to recover it.
00:29:25.339 [2024-11-06 11:11:16.593148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.340 [2024-11-06 11:11:16.593203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.340 [2024-11-06 11:11:16.593216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.340 [2024-11-06 11:11:16.593224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.340 [2024-11-06 11:11:16.593231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.340 [2024-11-06 11:11:16.593244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.340 qpair failed and we were unable to recover it.
00:29:25.340 [2024-11-06 11:11:16.603175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.340 [2024-11-06 11:11:16.603227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.340 [2024-11-06 11:11:16.603241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.340 [2024-11-06 11:11:16.603248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.340 [2024-11-06 11:11:16.603255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.340 [2024-11-06 11:11:16.603269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.340 qpair failed and we were unable to recover it. 
00:29:25.340 [2024-11-06 11:11:16.613190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.340 [2024-11-06 11:11:16.613251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.340 [2024-11-06 11:11:16.613264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.340 [2024-11-06 11:11:16.613272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.340 [2024-11-06 11:11:16.613279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.340 [2024-11-06 11:11:16.613293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.340 qpair failed and we were unable to recover it. 
00:29:25.340 [2024-11-06 11:11:16.623103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.340 [2024-11-06 11:11:16.623155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.340 [2024-11-06 11:11:16.623169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.340 [2024-11-06 11:11:16.623176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.340 [2024-11-06 11:11:16.623183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.340 [2024-11-06 11:11:16.623197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.340 qpair failed and we were unable to recover it. 
00:29:25.340 [2024-11-06 11:11:16.633254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.340 [2024-11-06 11:11:16.633306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.340 [2024-11-06 11:11:16.633320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.340 [2024-11-06 11:11:16.633327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.340 [2024-11-06 11:11:16.633334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.340 [2024-11-06 11:11:16.633347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.340 qpair failed and we were unable to recover it. 
00:29:25.340 [2024-11-06 11:11:16.643202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.340 [2024-11-06 11:11:16.643261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.340 [2024-11-06 11:11:16.643275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.340 [2024-11-06 11:11:16.643283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.340 [2024-11-06 11:11:16.643289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.340 [2024-11-06 11:11:16.643303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.340 qpair failed and we were unable to recover it. 
00:29:25.340 [2024-11-06 11:11:16.653287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.340 [2024-11-06 11:11:16.653347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.340 [2024-11-06 11:11:16.653361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.340 [2024-11-06 11:11:16.653368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.340 [2024-11-06 11:11:16.653375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.340 [2024-11-06 11:11:16.653389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.340 qpair failed and we were unable to recover it. 
00:29:25.340 [2024-11-06 11:11:16.663322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.340 [2024-11-06 11:11:16.663378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.340 [2024-11-06 11:11:16.663391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.340 [2024-11-06 11:11:16.663398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.340 [2024-11-06 11:11:16.663405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.340 [2024-11-06 11:11:16.663419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.340 qpair failed and we were unable to recover it. 
00:29:25.340 [2024-11-06 11:11:16.673351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.340 [2024-11-06 11:11:16.673437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.340 [2024-11-06 11:11:16.673454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.340 [2024-11-06 11:11:16.673462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.340 [2024-11-06 11:11:16.673469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.340 [2024-11-06 11:11:16.673483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.340 qpair failed and we were unable to recover it. 
00:29:25.340 [2024-11-06 11:11:16.683390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.340 [2024-11-06 11:11:16.683444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.340 [2024-11-06 11:11:16.683458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.340 [2024-11-06 11:11:16.683465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.340 [2024-11-06 11:11:16.683472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.340 [2024-11-06 11:11:16.683486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.340 qpair failed and we were unable to recover it. 
00:29:25.340 [2024-11-06 11:11:16.693421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.340 [2024-11-06 11:11:16.693474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.340 [2024-11-06 11:11:16.693488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.340 [2024-11-06 11:11:16.693495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.340 [2024-11-06 11:11:16.693501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.340 [2024-11-06 11:11:16.693515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.340 qpair failed and we were unable to recover it. 
00:29:25.340 [2024-11-06 11:11:16.703432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.340 [2024-11-06 11:11:16.703484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.340 [2024-11-06 11:11:16.703500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.340 [2024-11-06 11:11:16.703507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.340 [2024-11-06 11:11:16.703514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.340 [2024-11-06 11:11:16.703529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.340 qpair failed and we were unable to recover it. 
00:29:25.340 [2024-11-06 11:11:16.713462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.340 [2024-11-06 11:11:16.713523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.340 [2024-11-06 11:11:16.713537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.340 [2024-11-06 11:11:16.713545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.340 [2024-11-06 11:11:16.713555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.340 [2024-11-06 11:11:16.713569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.340 qpair failed and we were unable to recover it. 
00:29:25.340 [2024-11-06 11:11:16.723506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.340 [2024-11-06 11:11:16.723567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.340 [2024-11-06 11:11:16.723593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.341 [2024-11-06 11:11:16.723602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.341 [2024-11-06 11:11:16.723609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.341 [2024-11-06 11:11:16.723630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.341 qpair failed and we were unable to recover it. 
00:29:25.341 [2024-11-06 11:11:16.733529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.341 [2024-11-06 11:11:16.733585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.341 [2024-11-06 11:11:16.733600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.341 [2024-11-06 11:11:16.733608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.341 [2024-11-06 11:11:16.733615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.341 [2024-11-06 11:11:16.733630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.341 qpair failed and we were unable to recover it. 
00:29:25.341 [2024-11-06 11:11:16.743535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.341 [2024-11-06 11:11:16.743592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.341 [2024-11-06 11:11:16.743607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.341 [2024-11-06 11:11:16.743615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.341 [2024-11-06 11:11:16.743621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.341 [2024-11-06 11:11:16.743636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.341 qpair failed and we were unable to recover it. 
00:29:25.341 [2024-11-06 11:11:16.753574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.341 [2024-11-06 11:11:16.753633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.341 [2024-11-06 11:11:16.753647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.341 [2024-11-06 11:11:16.753654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.341 [2024-11-06 11:11:16.753661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.341 [2024-11-06 11:11:16.753675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.341 qpair failed and we were unable to recover it. 
00:29:25.603 [2024-11-06 11:11:16.763618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.603 [2024-11-06 11:11:16.763672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.603 [2024-11-06 11:11:16.763686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.603 [2024-11-06 11:11:16.763693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.603 [2024-11-06 11:11:16.763700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.603 [2024-11-06 11:11:16.763714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.603 qpair failed and we were unable to recover it. 
00:29:25.603 [2024-11-06 11:11:16.773608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.603 [2024-11-06 11:11:16.773696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.603 [2024-11-06 11:11:16.773709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.604 [2024-11-06 11:11:16.773718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.604 [2024-11-06 11:11:16.773725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.604 [2024-11-06 11:11:16.773739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.604 qpair failed and we were unable to recover it. 
00:29:25.604 [2024-11-06 11:11:16.783661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.604 [2024-11-06 11:11:16.783745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.604 [2024-11-06 11:11:16.783763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.604 [2024-11-06 11:11:16.783771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.604 [2024-11-06 11:11:16.783778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.604 [2024-11-06 11:11:16.783793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.604 qpair failed and we were unable to recover it. 
00:29:25.604 [2024-11-06 11:11:16.793698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.604 [2024-11-06 11:11:16.793757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.604 [2024-11-06 11:11:16.793771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.604 [2024-11-06 11:11:16.793779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.604 [2024-11-06 11:11:16.793785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.604 [2024-11-06 11:11:16.793800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.604 qpair failed and we were unable to recover it. 
00:29:25.604 [2024-11-06 11:11:16.803716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.604 [2024-11-06 11:11:16.803831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.604 [2024-11-06 11:11:16.803849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.604 [2024-11-06 11:11:16.803857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.604 [2024-11-06 11:11:16.803863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.604 [2024-11-06 11:11:16.803878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.604 qpair failed and we were unable to recover it. 
00:29:25.604 [2024-11-06 11:11:16.813731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.604 [2024-11-06 11:11:16.813785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.604 [2024-11-06 11:11:16.813799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.604 [2024-11-06 11:11:16.813806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.604 [2024-11-06 11:11:16.813813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.604 [2024-11-06 11:11:16.813826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.604 qpair failed and we were unable to recover it. 
00:29:25.604 [2024-11-06 11:11:16.823768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.604 [2024-11-06 11:11:16.823819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.604 [2024-11-06 11:11:16.823833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.604 [2024-11-06 11:11:16.823841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.604 [2024-11-06 11:11:16.823847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.604 [2024-11-06 11:11:16.823861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.604 qpair failed and we were unable to recover it. 
00:29:25.604 [2024-11-06 11:11:16.833786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.604 [2024-11-06 11:11:16.833855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.604 [2024-11-06 11:11:16.833868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.604 [2024-11-06 11:11:16.833876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.604 [2024-11-06 11:11:16.833882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.604 [2024-11-06 11:11:16.833896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.604 qpair failed and we were unable to recover it. 
00:29:25.604 [2024-11-06 11:11:16.843880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.604 [2024-11-06 11:11:16.843934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.604 [2024-11-06 11:11:16.843948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.604 [2024-11-06 11:11:16.843955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.604 [2024-11-06 11:11:16.843965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.604 [2024-11-06 11:11:16.843979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.604 qpair failed and we were unable to recover it. 
00:29:25.604 [2024-11-06 11:11:16.853840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.604 [2024-11-06 11:11:16.853896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.604 [2024-11-06 11:11:16.853909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.604 [2024-11-06 11:11:16.853916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.604 [2024-11-06 11:11:16.853923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.604 [2024-11-06 11:11:16.853937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.604 qpair failed and we were unable to recover it. 
00:29:25.604 [2024-11-06 11:11:16.863879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.604 [2024-11-06 11:11:16.863938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.604 [2024-11-06 11:11:16.863951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.604 [2024-11-06 11:11:16.863958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.604 [2024-11-06 11:11:16.863965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.604 [2024-11-06 11:11:16.863979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.604 qpair failed and we were unable to recover it. 
00:29:25.604 [2024-11-06 11:11:16.873905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.604 [2024-11-06 11:11:16.873960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.604 [2024-11-06 11:11:16.873973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.604 [2024-11-06 11:11:16.873980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.604 [2024-11-06 11:11:16.873987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.604 [2024-11-06 11:11:16.874001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.604 qpair failed and we were unable to recover it.
00:29:25.604 [2024-11-06 11:11:16.883909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.604 [2024-11-06 11:11:16.883961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.604 [2024-11-06 11:11:16.883975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.604 [2024-11-06 11:11:16.883982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.604 [2024-11-06 11:11:16.883989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.604 [2024-11-06 11:11:16.884003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.604 qpair failed and we were unable to recover it.
00:29:25.604 [2024-11-06 11:11:16.893842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.604 [2024-11-06 11:11:16.893903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.604 [2024-11-06 11:11:16.893918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.604 [2024-11-06 11:11:16.893925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.604 [2024-11-06 11:11:16.893932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.604 [2024-11-06 11:11:16.893947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.604 qpair failed and we were unable to recover it.
00:29:25.605 [2024-11-06 11:11:16.903982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.605 [2024-11-06 11:11:16.904041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.605 [2024-11-06 11:11:16.904055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.605 [2024-11-06 11:11:16.904062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.605 [2024-11-06 11:11:16.904068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.605 [2024-11-06 11:11:16.904082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.605 qpair failed and we were unable to recover it.
00:29:25.605 [2024-11-06 11:11:16.914022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.605 [2024-11-06 11:11:16.914075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.605 [2024-11-06 11:11:16.914088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.605 [2024-11-06 11:11:16.914095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.605 [2024-11-06 11:11:16.914102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.605 [2024-11-06 11:11:16.914116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.605 qpair failed and we were unable to recover it.
00:29:25.605 [2024-11-06 11:11:16.924074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.605 [2024-11-06 11:11:16.924130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.605 [2024-11-06 11:11:16.924144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.605 [2024-11-06 11:11:16.924151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.605 [2024-11-06 11:11:16.924157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.605 [2024-11-06 11:11:16.924171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.605 qpair failed and we were unable to recover it.
00:29:25.605 [2024-11-06 11:11:16.934052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.605 [2024-11-06 11:11:16.934112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.605 [2024-11-06 11:11:16.934129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.605 [2024-11-06 11:11:16.934136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.605 [2024-11-06 11:11:16.934143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.605 [2024-11-06 11:11:16.934157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.605 qpair failed and we were unable to recover it.
00:29:25.605 [2024-11-06 11:11:16.944103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.605 [2024-11-06 11:11:16.944159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.605 [2024-11-06 11:11:16.944172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.605 [2024-11-06 11:11:16.944179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.605 [2024-11-06 11:11:16.944186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.605 [2024-11-06 11:11:16.944200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.605 qpair failed and we were unable to recover it.
00:29:25.605 [2024-11-06 11:11:16.954129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.605 [2024-11-06 11:11:16.954188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.605 [2024-11-06 11:11:16.954201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.605 [2024-11-06 11:11:16.954208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.605 [2024-11-06 11:11:16.954215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.605 [2024-11-06 11:11:16.954229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.605 qpair failed and we were unable to recover it.
00:29:25.605 [2024-11-06 11:11:16.964179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.605 [2024-11-06 11:11:16.964273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.605 [2024-11-06 11:11:16.964286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.605 [2024-11-06 11:11:16.964294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.605 [2024-11-06 11:11:16.964300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.605 [2024-11-06 11:11:16.964314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.605 qpair failed and we were unable to recover it.
00:29:25.605 [2024-11-06 11:11:16.974204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.605 [2024-11-06 11:11:16.974257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.605 [2024-11-06 11:11:16.974270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.605 [2024-11-06 11:11:16.974277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.605 [2024-11-06 11:11:16.974287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.605 [2024-11-06 11:11:16.974301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.605 qpair failed and we were unable to recover it.
00:29:25.605 [2024-11-06 11:11:16.984200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.605 [2024-11-06 11:11:16.984249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.605 [2024-11-06 11:11:16.984262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.605 [2024-11-06 11:11:16.984270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.605 [2024-11-06 11:11:16.984276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.605 [2024-11-06 11:11:16.984289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.605 qpair failed and we were unable to recover it.
00:29:25.605 [2024-11-06 11:11:16.994246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.605 [2024-11-06 11:11:16.994300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.605 [2024-11-06 11:11:16.994314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.605 [2024-11-06 11:11:16.994321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.605 [2024-11-06 11:11:16.994328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.605 [2024-11-06 11:11:16.994341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.605 qpair failed and we were unable to recover it.
00:29:25.605 [2024-11-06 11:11:17.004272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.605 [2024-11-06 11:11:17.004360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.605 [2024-11-06 11:11:17.004373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.605 [2024-11-06 11:11:17.004381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.605 [2024-11-06 11:11:17.004387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.605 [2024-11-06 11:11:17.004401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.605 qpair failed and we were unable to recover it.
00:29:25.605 [2024-11-06 11:11:17.014286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.605 [2024-11-06 11:11:17.014337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.605 [2024-11-06 11:11:17.014350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.605 [2024-11-06 11:11:17.014357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.605 [2024-11-06 11:11:17.014364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.605 [2024-11-06 11:11:17.014378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.605 qpair failed and we were unable to recover it.
00:29:25.868 [2024-11-06 11:11:17.024326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.868 [2024-11-06 11:11:17.024383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.868 [2024-11-06 11:11:17.024396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.868 [2024-11-06 11:11:17.024404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.868 [2024-11-06 11:11:17.024410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.868 [2024-11-06 11:11:17.024425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.868 qpair failed and we were unable to recover it.
00:29:25.868 [2024-11-06 11:11:17.034230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.868 [2024-11-06 11:11:17.034325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.868 [2024-11-06 11:11:17.034338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.868 [2024-11-06 11:11:17.034345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.868 [2024-11-06 11:11:17.034352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.868 [2024-11-06 11:11:17.034366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.868 qpair failed and we were unable to recover it.
00:29:25.868 [2024-11-06 11:11:17.044268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.868 [2024-11-06 11:11:17.044327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.868 [2024-11-06 11:11:17.044340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.868 [2024-11-06 11:11:17.044348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.868 [2024-11-06 11:11:17.044354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.868 [2024-11-06 11:11:17.044368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.868 qpair failed and we were unable to recover it.
00:29:25.868 [2024-11-06 11:11:17.054396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.868 [2024-11-06 11:11:17.054447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.868 [2024-11-06 11:11:17.054460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.868 [2024-11-06 11:11:17.054468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.868 [2024-11-06 11:11:17.054474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.868 [2024-11-06 11:11:17.054488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.868 qpair failed and we were unable to recover it.
00:29:25.868 [2024-11-06 11:11:17.064413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.868 [2024-11-06 11:11:17.064475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.868 [2024-11-06 11:11:17.064505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.868 [2024-11-06 11:11:17.064514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.868 [2024-11-06 11:11:17.064521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.868 [2024-11-06 11:11:17.064540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.868 qpair failed and we were unable to recover it.
00:29:25.868 [2024-11-06 11:11:17.074472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.868 [2024-11-06 11:11:17.074531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.868 [2024-11-06 11:11:17.074556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.868 [2024-11-06 11:11:17.074565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.868 [2024-11-06 11:11:17.074572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.868 [2024-11-06 11:11:17.074592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.868 qpair failed and we were unable to recover it.
00:29:25.868 [2024-11-06 11:11:17.084496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.868 [2024-11-06 11:11:17.084589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.868 [2024-11-06 11:11:17.084615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.868 [2024-11-06 11:11:17.084623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.868 [2024-11-06 11:11:17.084631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.868 [2024-11-06 11:11:17.084650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.868 qpair failed and we were unable to recover it.
00:29:25.868 [2024-11-06 11:11:17.094504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.868 [2024-11-06 11:11:17.094556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.868 [2024-11-06 11:11:17.094572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.868 [2024-11-06 11:11:17.094580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.868 [2024-11-06 11:11:17.094586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.868 [2024-11-06 11:11:17.094601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.868 qpair failed and we were unable to recover it.
00:29:25.868 [2024-11-06 11:11:17.104499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.868 [2024-11-06 11:11:17.104559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.868 [2024-11-06 11:11:17.104573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.868 [2024-11-06 11:11:17.104580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.869 [2024-11-06 11:11:17.104595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.869 [2024-11-06 11:11:17.104610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.869 qpair failed and we were unable to recover it.
00:29:25.869 [2024-11-06 11:11:17.114577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.869 [2024-11-06 11:11:17.114637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.869 [2024-11-06 11:11:17.114652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.869 [2024-11-06 11:11:17.114659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.869 [2024-11-06 11:11:17.114666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.869 [2024-11-06 11:11:17.114680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.869 qpair failed and we were unable to recover it.
00:29:25.869 [2024-11-06 11:11:17.124607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.869 [2024-11-06 11:11:17.124664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.869 [2024-11-06 11:11:17.124678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.869 [2024-11-06 11:11:17.124685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.869 [2024-11-06 11:11:17.124692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.869 [2024-11-06 11:11:17.124706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.869 qpair failed and we were unable to recover it.
00:29:25.869 [2024-11-06 11:11:17.134618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.869 [2024-11-06 11:11:17.134673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.869 [2024-11-06 11:11:17.134687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.869 [2024-11-06 11:11:17.134694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.869 [2024-11-06 11:11:17.134701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.869 [2024-11-06 11:11:17.134715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.869 qpair failed and we were unable to recover it.
00:29:25.869 [2024-11-06 11:11:17.144646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.869 [2024-11-06 11:11:17.144699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.869 [2024-11-06 11:11:17.144713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.869 [2024-11-06 11:11:17.144720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.869 [2024-11-06 11:11:17.144727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.869 [2024-11-06 11:11:17.144741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.869 qpair failed and we were unable to recover it.
00:29:25.869 [2024-11-06 11:11:17.154687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.869 [2024-11-06 11:11:17.154748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.869 [2024-11-06 11:11:17.154761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.869 [2024-11-06 11:11:17.154769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.869 [2024-11-06 11:11:17.154775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.869 [2024-11-06 11:11:17.154790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.869 qpair failed and we were unable to recover it.
00:29:25.869 [2024-11-06 11:11:17.164683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.869 [2024-11-06 11:11:17.164739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.869 [2024-11-06 11:11:17.164756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.869 [2024-11-06 11:11:17.164764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.869 [2024-11-06 11:11:17.164770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.869 [2024-11-06 11:11:17.164785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.869 qpair failed and we were unable to recover it.
00:29:25.869 [2024-11-06 11:11:17.174735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.869 [2024-11-06 11:11:17.174792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.869 [2024-11-06 11:11:17.174806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.869 [2024-11-06 11:11:17.174813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.869 [2024-11-06 11:11:17.174820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.869 [2024-11-06 11:11:17.174834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.869 qpair failed and we were unable to recover it.
00:29:25.869 [2024-11-06 11:11:17.184762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.869 [2024-11-06 11:11:17.184817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.869 [2024-11-06 11:11:17.184832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.869 [2024-11-06 11:11:17.184839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.869 [2024-11-06 11:11:17.184846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.869 [2024-11-06 11:11:17.184860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.869 qpair failed and we were unable to recover it.
00:29:25.869 [2024-11-06 11:11:17.194789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.869 [2024-11-06 11:11:17.194847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.869 [2024-11-06 11:11:17.194864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.869 [2024-11-06 11:11:17.194872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.869 [2024-11-06 11:11:17.194879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.869 [2024-11-06 11:11:17.194893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.869 qpair failed and we were unable to recover it.
00:29:25.869 [2024-11-06 11:11:17.204828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.869 [2024-11-06 11:11:17.204889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.869 [2024-11-06 11:11:17.204903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.869 [2024-11-06 11:11:17.204910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.869 [2024-11-06 11:11:17.204917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.869 [2024-11-06 11:11:17.204931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.869 qpair failed and we were unable to recover it.
00:29:25.869 [2024-11-06 11:11:17.214782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.869 [2024-11-06 11:11:17.214879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.869 [2024-11-06 11:11:17.214892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.869 [2024-11-06 11:11:17.214900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.869 [2024-11-06 11:11:17.214907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:25.869 [2024-11-06 11:11:17.214921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.869 qpair failed and we were unable to recover it.
00:29:25.869 [2024-11-06 11:11:17.224842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.869 [2024-11-06 11:11:17.224893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.869 [2024-11-06 11:11:17.224906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.869 [2024-11-06 11:11:17.224914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.869 [2024-11-06 11:11:17.224921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.869 [2024-11-06 11:11:17.224935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.869 qpair failed and we were unable to recover it. 
00:29:25.869 [2024-11-06 11:11:17.234891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.869 [2024-11-06 11:11:17.234952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.869 [2024-11-06 11:11:17.234966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.869 [2024-11-06 11:11:17.234973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.869 [2024-11-06 11:11:17.234983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.870 [2024-11-06 11:11:17.234997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.870 qpair failed and we were unable to recover it. 
00:29:25.870 [2024-11-06 11:11:17.244933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.870 [2024-11-06 11:11:17.244995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.870 [2024-11-06 11:11:17.245008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.870 [2024-11-06 11:11:17.245015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.870 [2024-11-06 11:11:17.245022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.870 [2024-11-06 11:11:17.245035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.870 qpair failed and we were unable to recover it. 
00:29:25.870 [2024-11-06 11:11:17.254945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.870 [2024-11-06 11:11:17.255001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.870 [2024-11-06 11:11:17.255014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.870 [2024-11-06 11:11:17.255021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.870 [2024-11-06 11:11:17.255028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.870 [2024-11-06 11:11:17.255041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.870 qpair failed and we were unable to recover it. 
00:29:25.870 [2024-11-06 11:11:17.264980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.870 [2024-11-06 11:11:17.265032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.870 [2024-11-06 11:11:17.265045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.870 [2024-11-06 11:11:17.265053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.870 [2024-11-06 11:11:17.265059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.870 [2024-11-06 11:11:17.265073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.870 qpair failed and we were unable to recover it. 
00:29:25.870 [2024-11-06 11:11:17.275032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.870 [2024-11-06 11:11:17.275090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.870 [2024-11-06 11:11:17.275103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.870 [2024-11-06 11:11:17.275110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.870 [2024-11-06 11:11:17.275117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.870 [2024-11-06 11:11:17.275130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.870 qpair failed and we were unable to recover it. 
00:29:25.870 [2024-11-06 11:11:17.285051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.870 [2024-11-06 11:11:17.285102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.870 [2024-11-06 11:11:17.285115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.870 [2024-11-06 11:11:17.285123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.870 [2024-11-06 11:11:17.285129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:25.870 [2024-11-06 11:11:17.285144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.870 qpair failed and we were unable to recover it. 
00:29:26.133 [2024-11-06 11:11:17.295095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.133 [2024-11-06 11:11:17.295153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.133 [2024-11-06 11:11:17.295167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.133 [2024-11-06 11:11:17.295174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.133 [2024-11-06 11:11:17.295181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.133 [2024-11-06 11:11:17.295194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.133 qpair failed and we were unable to recover it. 
00:29:26.133 [2024-11-06 11:11:17.305086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.133 [2024-11-06 11:11:17.305138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.133 [2024-11-06 11:11:17.305151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.133 [2024-11-06 11:11:17.305158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.133 [2024-11-06 11:11:17.305165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.133 [2024-11-06 11:11:17.305179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.133 qpair failed and we were unable to recover it. 
00:29:26.133 [2024-11-06 11:11:17.315167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.133 [2024-11-06 11:11:17.315270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.133 [2024-11-06 11:11:17.315284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.133 [2024-11-06 11:11:17.315291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.133 [2024-11-06 11:11:17.315298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.133 [2024-11-06 11:11:17.315311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.133 qpair failed and we were unable to recover it. 
00:29:26.133 [2024-11-06 11:11:17.325133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.133 [2024-11-06 11:11:17.325184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.133 [2024-11-06 11:11:17.325200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.133 [2024-11-06 11:11:17.325208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.133 [2024-11-06 11:11:17.325214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.133 [2024-11-06 11:11:17.325228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.133 qpair failed and we were unable to recover it. 
00:29:26.133 [2024-11-06 11:11:17.335185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.133 [2024-11-06 11:11:17.335238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.133 [2024-11-06 11:11:17.335251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.133 [2024-11-06 11:11:17.335258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.133 [2024-11-06 11:11:17.335265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.133 [2024-11-06 11:11:17.335278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.133 qpair failed and we were unable to recover it. 
00:29:26.133 [2024-11-06 11:11:17.345216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.133 [2024-11-06 11:11:17.345265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.133 [2024-11-06 11:11:17.345278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.133 [2024-11-06 11:11:17.345286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.133 [2024-11-06 11:11:17.345292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.133 [2024-11-06 11:11:17.345306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.133 qpair failed and we were unable to recover it. 
00:29:26.133 [2024-11-06 11:11:17.355209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.133 [2024-11-06 11:11:17.355285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.133 [2024-11-06 11:11:17.355299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.133 [2024-11-06 11:11:17.355306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.133 [2024-11-06 11:11:17.355312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.133 [2024-11-06 11:11:17.355326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.133 qpair failed and we were unable to recover it. 
00:29:26.133 [2024-11-06 11:11:17.365234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.133 [2024-11-06 11:11:17.365288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.133 [2024-11-06 11:11:17.365302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.133 [2024-11-06 11:11:17.365309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.133 [2024-11-06 11:11:17.365319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.133 [2024-11-06 11:11:17.365333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.133 qpair failed and we were unable to recover it. 
00:29:26.133 [2024-11-06 11:11:17.375273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.133 [2024-11-06 11:11:17.375327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.133 [2024-11-06 11:11:17.375340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.133 [2024-11-06 11:11:17.375348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.133 [2024-11-06 11:11:17.375354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.134 [2024-11-06 11:11:17.375368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.134 qpair failed and we were unable to recover it. 
00:29:26.134 [2024-11-06 11:11:17.385274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.134 [2024-11-06 11:11:17.385341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.134 [2024-11-06 11:11:17.385354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.134 [2024-11-06 11:11:17.385361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.134 [2024-11-06 11:11:17.385368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.134 [2024-11-06 11:11:17.385381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.134 qpair failed and we were unable to recover it. 
00:29:26.134 [2024-11-06 11:11:17.395330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.134 [2024-11-06 11:11:17.395386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.134 [2024-11-06 11:11:17.395399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.134 [2024-11-06 11:11:17.395407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.134 [2024-11-06 11:11:17.395413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.134 [2024-11-06 11:11:17.395427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.134 qpair failed and we were unable to recover it. 
00:29:26.134 [2024-11-06 11:11:17.405344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.134 [2024-11-06 11:11:17.405393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.134 [2024-11-06 11:11:17.405407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.134 [2024-11-06 11:11:17.405414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.134 [2024-11-06 11:11:17.405421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.134 [2024-11-06 11:11:17.405435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.134 qpair failed and we were unable to recover it. 
00:29:26.134 [2024-11-06 11:11:17.415372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.134 [2024-11-06 11:11:17.415424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.134 [2024-11-06 11:11:17.415437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.134 [2024-11-06 11:11:17.415444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.134 [2024-11-06 11:11:17.415451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.134 [2024-11-06 11:11:17.415465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.134 qpair failed and we were unable to recover it. 
00:29:26.134 [2024-11-06 11:11:17.425418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.134 [2024-11-06 11:11:17.425483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.134 [2024-11-06 11:11:17.425496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.134 [2024-11-06 11:11:17.425503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.134 [2024-11-06 11:11:17.425510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.134 [2024-11-06 11:11:17.425523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.134 qpair failed and we were unable to recover it. 
00:29:26.134 [2024-11-06 11:11:17.435451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.134 [2024-11-06 11:11:17.435513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.134 [2024-11-06 11:11:17.435538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.134 [2024-11-06 11:11:17.435547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.134 [2024-11-06 11:11:17.435554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.134 [2024-11-06 11:11:17.435573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.134 qpair failed and we were unable to recover it. 
00:29:26.134 [2024-11-06 11:11:17.445441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.134 [2024-11-06 11:11:17.445498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.134 [2024-11-06 11:11:17.445524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.134 [2024-11-06 11:11:17.445533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.134 [2024-11-06 11:11:17.445540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.134 [2024-11-06 11:11:17.445559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.134 qpair failed and we were unable to recover it. 
00:29:26.134 [2024-11-06 11:11:17.455506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.134 [2024-11-06 11:11:17.455557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.134 [2024-11-06 11:11:17.455577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.134 [2024-11-06 11:11:17.455585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.134 [2024-11-06 11:11:17.455592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.134 [2024-11-06 11:11:17.455607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.134 qpair failed and we were unable to recover it. 
00:29:26.134 [2024-11-06 11:11:17.465447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.134 [2024-11-06 11:11:17.465499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.134 [2024-11-06 11:11:17.465513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.134 [2024-11-06 11:11:17.465520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.134 [2024-11-06 11:11:17.465527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.134 [2024-11-06 11:11:17.465541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.134 qpair failed and we were unable to recover it. 
00:29:26.134 [2024-11-06 11:11:17.475561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.134 [2024-11-06 11:11:17.475618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.134 [2024-11-06 11:11:17.475631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.134 [2024-11-06 11:11:17.475639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.134 [2024-11-06 11:11:17.475645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.134 [2024-11-06 11:11:17.475660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.134 qpair failed and we were unable to recover it. 
00:29:26.134 [2024-11-06 11:11:17.485538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.134 [2024-11-06 11:11:17.485588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.134 [2024-11-06 11:11:17.485602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.134 [2024-11-06 11:11:17.485609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.134 [2024-11-06 11:11:17.485616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.134 [2024-11-06 11:11:17.485630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.134 qpair failed and we were unable to recover it. 
00:29:26.134 [2024-11-06 11:11:17.495590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.134 [2024-11-06 11:11:17.495652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.134 [2024-11-06 11:11:17.495667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.134 [2024-11-06 11:11:17.495675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.134 [2024-11-06 11:11:17.495687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.134 [2024-11-06 11:11:17.495701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.134 qpair failed and we were unable to recover it. 
00:29:26.134 (last CONNECT failure sequence repeated 34 more times, timestamps 11:11:17.505590 through 11:11:17.836695, one retry approximately every 10 ms; each attempt ended with the same errors: Unknown controller ID 0x1, Connect command failed rc -5, sct 1 sc 130, Failed to connect tqpair=0x20af0c0, CQ transport error -6 on qpair id 3)
00:29:26.664 qpair failed and we were unable to recover it.
00:29:26.664 [2024-11-06 11:11:17.846561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.664 [2024-11-06 11:11:17.846620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.664 [2024-11-06 11:11:17.846638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.664 [2024-11-06 11:11:17.846646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.664 [2024-11-06 11:11:17.846652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.664 [2024-11-06 11:11:17.846667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.664 qpair failed and we were unable to recover it. 
00:29:26.664 [2024-11-06 11:11:17.856611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.664 [2024-11-06 11:11:17.856694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.664 [2024-11-06 11:11:17.856708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.664 [2024-11-06 11:11:17.856716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.664 [2024-11-06 11:11:17.856723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.664 [2024-11-06 11:11:17.856738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.664 qpair failed and we were unable to recover it. 
00:29:26.664 [2024-11-06 11:11:17.866596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.664 [2024-11-06 11:11:17.866650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.664 [2024-11-06 11:11:17.866663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.664 [2024-11-06 11:11:17.866671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.664 [2024-11-06 11:11:17.866677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.664 [2024-11-06 11:11:17.866692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.664 qpair failed and we were unable to recover it. 
00:29:26.664 [2024-11-06 11:11:17.876667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.664 [2024-11-06 11:11:17.876725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.664 [2024-11-06 11:11:17.876740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.664 [2024-11-06 11:11:17.876752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.664 [2024-11-06 11:11:17.876759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.664 [2024-11-06 11:11:17.876774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.664 qpair failed and we were unable to recover it. 
00:29:26.664 [2024-11-06 11:11:17.886661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.664 [2024-11-06 11:11:17.886712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.664 [2024-11-06 11:11:17.886727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.664 [2024-11-06 11:11:17.886734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.664 [2024-11-06 11:11:17.886744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.664 [2024-11-06 11:11:17.886772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.664 qpair failed and we were unable to recover it. 
00:29:26.664 [2024-11-06 11:11:17.896739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.664 [2024-11-06 11:11:17.896794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.664 [2024-11-06 11:11:17.896808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.664 [2024-11-06 11:11:17.896816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.664 [2024-11-06 11:11:17.896822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.664 [2024-11-06 11:11:17.896836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.664 qpair failed and we were unable to recover it. 
00:29:26.664 [2024-11-06 11:11:17.906724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.664 [2024-11-06 11:11:17.906775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.664 [2024-11-06 11:11:17.906789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.664 [2024-11-06 11:11:17.906797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.664 [2024-11-06 11:11:17.906803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.664 [2024-11-06 11:11:17.906817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.664 qpair failed and we were unable to recover it. 
00:29:26.664 [2024-11-06 11:11:17.916670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.664 [2024-11-06 11:11:17.916736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.664 [2024-11-06 11:11:17.916753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.664 [2024-11-06 11:11:17.916761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.664 [2024-11-06 11:11:17.916767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.664 [2024-11-06 11:11:17.916781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.664 qpair failed and we were unable to recover it. 
00:29:26.664 [2024-11-06 11:11:17.926779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.664 [2024-11-06 11:11:17.926836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.664 [2024-11-06 11:11:17.926849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.664 [2024-11-06 11:11:17.926857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.664 [2024-11-06 11:11:17.926863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.664 [2024-11-06 11:11:17.926877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.664 qpair failed and we were unable to recover it. 
00:29:26.664 [2024-11-06 11:11:17.936768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.664 [2024-11-06 11:11:17.936819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.664 [2024-11-06 11:11:17.936833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.664 [2024-11-06 11:11:17.936840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.664 [2024-11-06 11:11:17.936847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.664 [2024-11-06 11:11:17.936861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.664 qpair failed and we were unable to recover it. 
00:29:26.664 [2024-11-06 11:11:17.946783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.664 [2024-11-06 11:11:17.946854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.664 [2024-11-06 11:11:17.946867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.664 [2024-11-06 11:11:17.946875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.664 [2024-11-06 11:11:17.946881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.664 [2024-11-06 11:11:17.946895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.664 qpair failed and we were unable to recover it. 
00:29:26.664 [2024-11-06 11:11:17.956869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.664 [2024-11-06 11:11:17.956955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.664 [2024-11-06 11:11:17.956969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.664 [2024-11-06 11:11:17.956976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.664 [2024-11-06 11:11:17.956983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.664 [2024-11-06 11:11:17.956997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.664 qpair failed and we were unable to recover it. 
00:29:26.664 [2024-11-06 11:11:17.966876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.665 [2024-11-06 11:11:17.966925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.665 [2024-11-06 11:11:17.966938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.665 [2024-11-06 11:11:17.966946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.665 [2024-11-06 11:11:17.966953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.665 [2024-11-06 11:11:17.966966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.665 qpair failed and we were unable to recover it. 
00:29:26.665 [2024-11-06 11:11:17.976939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.665 [2024-11-06 11:11:17.976994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.665 [2024-11-06 11:11:17.977010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.665 [2024-11-06 11:11:17.977017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.665 [2024-11-06 11:11:17.977024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.665 [2024-11-06 11:11:17.977038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.665 qpair failed and we were unable to recover it. 
00:29:26.665 [2024-11-06 11:11:17.987038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.665 [2024-11-06 11:11:17.987094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.665 [2024-11-06 11:11:17.987108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.665 [2024-11-06 11:11:17.987115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.665 [2024-11-06 11:11:17.987122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.665 [2024-11-06 11:11:17.987136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.665 qpair failed and we were unable to recover it. 
00:29:26.665 [2024-11-06 11:11:17.997061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.665 [2024-11-06 11:11:17.997118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.665 [2024-11-06 11:11:17.997131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.665 [2024-11-06 11:11:17.997139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.665 [2024-11-06 11:11:17.997145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.665 [2024-11-06 11:11:17.997160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.665 qpair failed and we were unable to recover it. 
00:29:26.665 [2024-11-06 11:11:18.007022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.665 [2024-11-06 11:11:18.007077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.665 [2024-11-06 11:11:18.007091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.665 [2024-11-06 11:11:18.007099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.665 [2024-11-06 11:11:18.007106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.665 [2024-11-06 11:11:18.007120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.665 qpair failed and we were unable to recover it. 
00:29:26.665 [2024-11-06 11:11:18.017093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.665 [2024-11-06 11:11:18.017145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.665 [2024-11-06 11:11:18.017159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.665 [2024-11-06 11:11:18.017169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.665 [2024-11-06 11:11:18.017176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.665 [2024-11-06 11:11:18.017191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.665 qpair failed and we were unable to recover it. 
00:29:26.665 [2024-11-06 11:11:18.027079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.665 [2024-11-06 11:11:18.027139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.665 [2024-11-06 11:11:18.027152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.665 [2024-11-06 11:11:18.027160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.665 [2024-11-06 11:11:18.027167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.665 [2024-11-06 11:11:18.027181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.665 qpair failed and we were unable to recover it. 
00:29:26.665 [2024-11-06 11:11:18.037123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.665 [2024-11-06 11:11:18.037181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.665 [2024-11-06 11:11:18.037195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.665 [2024-11-06 11:11:18.037202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.665 [2024-11-06 11:11:18.037209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.665 [2024-11-06 11:11:18.037223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.665 qpair failed and we were unable to recover it. 
00:29:26.665 [2024-11-06 11:11:18.047119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.665 [2024-11-06 11:11:18.047199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.665 [2024-11-06 11:11:18.047213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.665 [2024-11-06 11:11:18.047221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.665 [2024-11-06 11:11:18.047228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.665 [2024-11-06 11:11:18.047242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.665 qpair failed and we were unable to recover it. 
00:29:26.665 [2024-11-06 11:11:18.057120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.665 [2024-11-06 11:11:18.057173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.665 [2024-11-06 11:11:18.057186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.665 [2024-11-06 11:11:18.057194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.665 [2024-11-06 11:11:18.057201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.665 [2024-11-06 11:11:18.057215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.665 qpair failed and we were unable to recover it. 
00:29:26.665 [2024-11-06 11:11:18.067130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.665 [2024-11-06 11:11:18.067184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.665 [2024-11-06 11:11:18.067198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.665 [2024-11-06 11:11:18.067206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.665 [2024-11-06 11:11:18.067212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.665 [2024-11-06 11:11:18.067227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.665 qpair failed and we were unable to recover it. 
00:29:26.665 [2024-11-06 11:11:18.077200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.665 [2024-11-06 11:11:18.077259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.665 [2024-11-06 11:11:18.077273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.665 [2024-11-06 11:11:18.077281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.665 [2024-11-06 11:11:18.077287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.665 [2024-11-06 11:11:18.077302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.665 qpair failed and we were unable to recover it. 
00:29:26.927 [2024-11-06 11:11:18.087093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.927 [2024-11-06 11:11:18.087147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.927 [2024-11-06 11:11:18.087161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.927 [2024-11-06 11:11:18.087169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.927 [2024-11-06 11:11:18.087175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.927 [2024-11-06 11:11:18.087189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.927 qpair failed and we were unable to recover it. 
00:29:26.927 [2024-11-06 11:11:18.097270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.927 [2024-11-06 11:11:18.097323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.927 [2024-11-06 11:11:18.097337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.928 [2024-11-06 11:11:18.097345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.928 [2024-11-06 11:11:18.097351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.928 [2024-11-06 11:11:18.097365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.928 qpair failed and we were unable to recover it. 
00:29:26.928 [2024-11-06 11:11:18.107259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.928 [2024-11-06 11:11:18.107308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.928 [2024-11-06 11:11:18.107325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.928 [2024-11-06 11:11:18.107332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.928 [2024-11-06 11:11:18.107339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.928 [2024-11-06 11:11:18.107352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.928 qpair failed and we were unable to recover it. 
00:29:26.928 [2024-11-06 11:11:18.117345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.928 [2024-11-06 11:11:18.117444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.928 [2024-11-06 11:11:18.117458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.928 [2024-11-06 11:11:18.117465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.928 [2024-11-06 11:11:18.117472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.928 [2024-11-06 11:11:18.117486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.928 qpair failed and we were unable to recover it. 
00:29:26.928 [2024-11-06 11:11:18.127340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.928 [2024-11-06 11:11:18.127391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.928 [2024-11-06 11:11:18.127404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.928 [2024-11-06 11:11:18.127412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.928 [2024-11-06 11:11:18.127418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.928 [2024-11-06 11:11:18.127433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.928 qpair failed and we were unable to recover it. 
00:29:26.928 [2024-11-06 11:11:18.137278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.928 [2024-11-06 11:11:18.137330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.928 [2024-11-06 11:11:18.137346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.928 [2024-11-06 11:11:18.137353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.928 [2024-11-06 11:11:18.137360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.928 [2024-11-06 11:11:18.137375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.928 qpair failed and we were unable to recover it. 
00:29:26.928 [2024-11-06 11:11:18.147356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.928 [2024-11-06 11:11:18.147405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.928 [2024-11-06 11:11:18.147420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.928 [2024-11-06 11:11:18.147431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.928 [2024-11-06 11:11:18.147438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.928 [2024-11-06 11:11:18.147453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.928 qpair failed and we were unable to recover it. 
00:29:26.928 [2024-11-06 11:11:18.157444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.928 [2024-11-06 11:11:18.157513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.928 [2024-11-06 11:11:18.157538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.928 [2024-11-06 11:11:18.157547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.928 [2024-11-06 11:11:18.157555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.928 [2024-11-06 11:11:18.157574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.928 qpair failed and we were unable to recover it. 
00:29:26.928 [2024-11-06 11:11:18.167420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.928 [2024-11-06 11:11:18.167472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.928 [2024-11-06 11:11:18.167488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.928 [2024-11-06 11:11:18.167496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.928 [2024-11-06 11:11:18.167502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.928 [2024-11-06 11:11:18.167518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.928 qpair failed and we were unable to recover it. 
00:29:26.928 [2024-11-06 11:11:18.177502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.928 [2024-11-06 11:11:18.177552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.928 [2024-11-06 11:11:18.177566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.928 [2024-11-06 11:11:18.177574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.928 [2024-11-06 11:11:18.177581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.928 [2024-11-06 11:11:18.177595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.928 qpair failed and we were unable to recover it. 
00:29:26.928 [2024-11-06 11:11:18.187414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.928 [2024-11-06 11:11:18.187480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.928 [2024-11-06 11:11:18.187495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.928 [2024-11-06 11:11:18.187502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.928 [2024-11-06 11:11:18.187509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.928 [2024-11-06 11:11:18.187523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.928 qpair failed and we were unable to recover it. 
00:29:26.928 [2024-11-06 11:11:18.197560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.928 [2024-11-06 11:11:18.197615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.928 [2024-11-06 11:11:18.197629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.928 [2024-11-06 11:11:18.197637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.928 [2024-11-06 11:11:18.197643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.928 [2024-11-06 11:11:18.197658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.928 qpair failed and we were unable to recover it. 
00:29:26.928 [2024-11-06 11:11:18.207551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.928 [2024-11-06 11:11:18.207604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.928 [2024-11-06 11:11:18.207617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.928 [2024-11-06 11:11:18.207625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.928 [2024-11-06 11:11:18.207632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.928 [2024-11-06 11:11:18.207646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.928 qpair failed and we were unable to recover it. 
00:29:26.928 [2024-11-06 11:11:18.217604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.928 [2024-11-06 11:11:18.217689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.928 [2024-11-06 11:11:18.217702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.928 [2024-11-06 11:11:18.217711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.928 [2024-11-06 11:11:18.217717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.928 [2024-11-06 11:11:18.217731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.929 qpair failed and we were unable to recover it. 
00:29:26.929 [2024-11-06 11:11:18.227542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.929 [2024-11-06 11:11:18.227591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.929 [2024-11-06 11:11:18.227605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.929 [2024-11-06 11:11:18.227612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.929 [2024-11-06 11:11:18.227618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.929 [2024-11-06 11:11:18.227632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.929 qpair failed and we were unable to recover it. 
00:29:26.929 [2024-11-06 11:11:18.237663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.929 [2024-11-06 11:11:18.237717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.929 [2024-11-06 11:11:18.237735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.929 [2024-11-06 11:11:18.237742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.929 [2024-11-06 11:11:18.237753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.929 [2024-11-06 11:11:18.237768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.929 qpair failed and we were unable to recover it. 
00:29:26.929 [2024-11-06 11:11:18.247605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.929 [2024-11-06 11:11:18.247682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.929 [2024-11-06 11:11:18.247696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.929 [2024-11-06 11:11:18.247703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.929 [2024-11-06 11:11:18.247710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.929 [2024-11-06 11:11:18.247725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.929 qpair failed and we were unable to recover it. 
00:29:26.929 [2024-11-06 11:11:18.257701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.929 [2024-11-06 11:11:18.257759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.929 [2024-11-06 11:11:18.257773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.929 [2024-11-06 11:11:18.257780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.929 [2024-11-06 11:11:18.257787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.929 [2024-11-06 11:11:18.257802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.929 qpair failed and we were unable to recover it. 
00:29:26.929 [2024-11-06 11:11:18.267680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.929 [2024-11-06 11:11:18.267734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.929 [2024-11-06 11:11:18.267752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.929 [2024-11-06 11:11:18.267760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.929 [2024-11-06 11:11:18.267766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.929 [2024-11-06 11:11:18.267781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.929 qpair failed and we were unable to recover it. 
00:29:26.929 [2024-11-06 11:11:18.277768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.929 [2024-11-06 11:11:18.277825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.929 [2024-11-06 11:11:18.277838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.929 [2024-11-06 11:11:18.277850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.929 [2024-11-06 11:11:18.277857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.929 [2024-11-06 11:11:18.277871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.929 qpair failed and we were unable to recover it. 
00:29:26.929 [2024-11-06 11:11:18.287756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.929 [2024-11-06 11:11:18.287811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.929 [2024-11-06 11:11:18.287825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.929 [2024-11-06 11:11:18.287833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.929 [2024-11-06 11:11:18.287840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.929 [2024-11-06 11:11:18.287854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.929 qpair failed and we were unable to recover it. 
00:29:26.929 [2024-11-06 11:11:18.297820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.929 [2024-11-06 11:11:18.297900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.929 [2024-11-06 11:11:18.297914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.929 [2024-11-06 11:11:18.297922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.929 [2024-11-06 11:11:18.297928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.929 [2024-11-06 11:11:18.297943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.929 qpair failed and we were unable to recover it. 
00:29:26.929 [2024-11-06 11:11:18.307770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.929 [2024-11-06 11:11:18.307821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.929 [2024-11-06 11:11:18.307835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.929 [2024-11-06 11:11:18.307842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.929 [2024-11-06 11:11:18.307849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.929 [2024-11-06 11:11:18.307863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.929 qpair failed and we were unable to recover it. 
00:29:26.929 [2024-11-06 11:11:18.317870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.929 [2024-11-06 11:11:18.317927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.929 [2024-11-06 11:11:18.317940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.929 [2024-11-06 11:11:18.317948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.929 [2024-11-06 11:11:18.317954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.929 [2024-11-06 11:11:18.317968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.929 qpair failed and we were unable to recover it. 
00:29:26.929 [2024-11-06 11:11:18.327836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.929 [2024-11-06 11:11:18.327892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.929 [2024-11-06 11:11:18.327905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.929 [2024-11-06 11:11:18.327913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.929 [2024-11-06 11:11:18.327919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.929 [2024-11-06 11:11:18.327933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.929 qpair failed and we were unable to recover it. 
00:29:26.929 [2024-11-06 11:11:18.337920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.929 [2024-11-06 11:11:18.338012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.929 [2024-11-06 11:11:18.338026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.929 [2024-11-06 11:11:18.338034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.929 [2024-11-06 11:11:18.338040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:26.929 [2024-11-06 11:11:18.338054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.929 qpair failed and we were unable to recover it. 
00:29:27.192 [2024-11-06 11:11:18.347911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.192 [2024-11-06 11:11:18.347963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.192 [2024-11-06 11:11:18.347977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.192 [2024-11-06 11:11:18.347984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.192 [2024-11-06 11:11:18.347991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.192 [2024-11-06 11:11:18.348005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.192 qpair failed and we were unable to recover it. 
00:29:27.192 [2024-11-06 11:11:18.357996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.192 [2024-11-06 11:11:18.358050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.192 [2024-11-06 11:11:18.358064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.192 [2024-11-06 11:11:18.358072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.192 [2024-11-06 11:11:18.358078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.192 [2024-11-06 11:11:18.358092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.192 qpair failed and we were unable to recover it. 
00:29:27.192 [2024-11-06 11:11:18.368000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.192 [2024-11-06 11:11:18.368055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.192 [2024-11-06 11:11:18.368069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.192 [2024-11-06 11:11:18.368076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.192 [2024-11-06 11:11:18.368083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.192 [2024-11-06 11:11:18.368097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.192 qpair failed and we were unable to recover it. 
00:29:27.192 [2024-11-06 11:11:18.378005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.192 [2024-11-06 11:11:18.378061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.192 [2024-11-06 11:11:18.378075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.192 [2024-11-06 11:11:18.378082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.192 [2024-11-06 11:11:18.378089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.192 [2024-11-06 11:11:18.378103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.192 qpair failed and we were unable to recover it. 
00:29:27.192 [2024-11-06 11:11:18.388022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.192 [2024-11-06 11:11:18.388106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.192 [2024-11-06 11:11:18.388120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.192 [2024-11-06 11:11:18.388128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.192 [2024-11-06 11:11:18.388135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.192 [2024-11-06 11:11:18.388149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.192 qpair failed and we were unable to recover it. 
00:29:27.192 [2024-11-06 11:11:18.398099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.192 [2024-11-06 11:11:18.398155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.192 [2024-11-06 11:11:18.398168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.192 [2024-11-06 11:11:18.398175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.192 [2024-11-06 11:11:18.398182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.192 [2024-11-06 11:11:18.398196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.192 qpair failed and we were unable to recover it. 
00:29:27.192 [2024-11-06 11:11:18.408102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.192 [2024-11-06 11:11:18.408156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.192 [2024-11-06 11:11:18.408169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.192 [2024-11-06 11:11:18.408180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.192 [2024-11-06 11:11:18.408187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.192 [2024-11-06 11:11:18.408200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.192 qpair failed and we were unable to recover it. 
00:29:27.192 [2024-11-06 11:11:18.418040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.192 [2024-11-06 11:11:18.418095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.192 [2024-11-06 11:11:18.418109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.192 [2024-11-06 11:11:18.418116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.192 [2024-11-06 11:11:18.418123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.192 [2024-11-06 11:11:18.418137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.192 qpair failed and we were unable to recover it. 
00:29:27.192 [2024-11-06 11:11:18.428131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.192 [2024-11-06 11:11:18.428181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.192 [2024-11-06 11:11:18.428195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.192 [2024-11-06 11:11:18.428202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.192 [2024-11-06 11:11:18.428209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.192 [2024-11-06 11:11:18.428223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.192 qpair failed and we were unable to recover it.
00:29:27.192 [2024-11-06 11:11:18.438196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.192 [2024-11-06 11:11:18.438254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.192 [2024-11-06 11:11:18.438267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.192 [2024-11-06 11:11:18.438275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.192 [2024-11-06 11:11:18.438282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.192 [2024-11-06 11:11:18.438296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.192 qpair failed and we were unable to recover it.
00:29:27.192 [2024-11-06 11:11:18.448172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.192 [2024-11-06 11:11:18.448228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.192 [2024-11-06 11:11:18.448242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.192 [2024-11-06 11:11:18.448249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.192 [2024-11-06 11:11:18.448256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.192 [2024-11-06 11:11:18.448270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.192 qpair failed and we were unable to recover it.
00:29:27.192 [2024-11-06 11:11:18.458197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.192 [2024-11-06 11:11:18.458250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-11-06 11:11:18.458263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-11-06 11:11:18.458271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-11-06 11:11:18.458278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.193 [2024-11-06 11:11:18.458291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-11-06 11:11:18.468233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-11-06 11:11:18.468281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-11-06 11:11:18.468294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-11-06 11:11:18.468302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-11-06 11:11:18.468308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.193 [2024-11-06 11:11:18.468322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-11-06 11:11:18.478316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-11-06 11:11:18.478371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-11-06 11:11:18.478383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-11-06 11:11:18.478391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-11-06 11:11:18.478398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.193 [2024-11-06 11:11:18.478412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-11-06 11:11:18.488315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-11-06 11:11:18.488412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-11-06 11:11:18.488427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-11-06 11:11:18.488435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-11-06 11:11:18.488441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.193 [2024-11-06 11:11:18.488455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-11-06 11:11:18.498312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-11-06 11:11:18.498367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-11-06 11:11:18.498381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-11-06 11:11:18.498388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-11-06 11:11:18.498395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.193 [2024-11-06 11:11:18.498408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-11-06 11:11:18.508340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-11-06 11:11:18.508394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-11-06 11:11:18.508408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-11-06 11:11:18.508415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-11-06 11:11:18.508422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.193 [2024-11-06 11:11:18.508436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-11-06 11:11:18.518436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-11-06 11:11:18.518542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-11-06 11:11:18.518556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-11-06 11:11:18.518564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-11-06 11:11:18.518571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.193 [2024-11-06 11:11:18.518585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-11-06 11:11:18.528301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-11-06 11:11:18.528363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-11-06 11:11:18.528388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-11-06 11:11:18.528397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-11-06 11:11:18.528405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.193 [2024-11-06 11:11:18.528425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-11-06 11:11:18.538441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-11-06 11:11:18.538492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-11-06 11:11:18.538507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-11-06 11:11:18.538519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-11-06 11:11:18.538526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.193 [2024-11-06 11:11:18.538541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-11-06 11:11:18.548473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-11-06 11:11:18.548529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-11-06 11:11:18.548555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-11-06 11:11:18.548565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-11-06 11:11:18.548572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.193 [2024-11-06 11:11:18.548592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-11-06 11:11:18.558536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-11-06 11:11:18.558599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-11-06 11:11:18.558625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-11-06 11:11:18.558634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-11-06 11:11:18.558642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.193 [2024-11-06 11:11:18.558662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-11-06 11:11:18.568479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-11-06 11:11:18.568534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-11-06 11:11:18.568559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-11-06 11:11:18.568568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-11-06 11:11:18.568575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.193 [2024-11-06 11:11:18.568595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-11-06 11:11:18.578587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-11-06 11:11:18.578665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-11-06 11:11:18.578690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-11-06 11:11:18.578700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-11-06 11:11:18.578707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.193 [2024-11-06 11:11:18.578727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-11-06 11:11:18.588573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-11-06 11:11:18.588625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-11-06 11:11:18.588642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.194 [2024-11-06 11:11:18.588650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.194 [2024-11-06 11:11:18.588657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.194 [2024-11-06 11:11:18.588673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.194 qpair failed and we were unable to recover it.
00:29:27.194 [2024-11-06 11:11:18.598650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.194 [2024-11-06 11:11:18.598710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.194 [2024-11-06 11:11:18.598725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.194 [2024-11-06 11:11:18.598732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.194 [2024-11-06 11:11:18.598739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.194 [2024-11-06 11:11:18.598759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.194 qpair failed and we were unable to recover it.
00:29:27.194 [2024-11-06 11:11:18.608638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.194 [2024-11-06 11:11:18.608693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.194 [2024-11-06 11:11:18.608707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.194 [2024-11-06 11:11:18.608715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.194 [2024-11-06 11:11:18.608721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.194 [2024-11-06 11:11:18.608735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.194 qpair failed and we were unable to recover it.
00:29:27.457 [2024-11-06 11:11:18.618663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-11-06 11:11:18.618711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-11-06 11:11:18.618725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-11-06 11:11:18.618732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-11-06 11:11:18.618739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.457 [2024-11-06 11:11:18.618756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-11-06 11:11:18.628690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-11-06 11:11:18.628742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-11-06 11:11:18.628760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-11-06 11:11:18.628767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-11-06 11:11:18.628774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.457 [2024-11-06 11:11:18.628789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-11-06 11:11:18.638756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-11-06 11:11:18.638816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-11-06 11:11:18.638830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-11-06 11:11:18.638837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-11-06 11:11:18.638844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.457 [2024-11-06 11:11:18.638858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-11-06 11:11:18.648761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-11-06 11:11:18.648811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-11-06 11:11:18.648825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-11-06 11:11:18.648832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-11-06 11:11:18.648839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.457 [2024-11-06 11:11:18.648853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-11-06 11:11:18.658804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-11-06 11:11:18.658858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-11-06 11:11:18.658872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-11-06 11:11:18.658879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-11-06 11:11:18.658886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.457 [2024-11-06 11:11:18.658900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-11-06 11:11:18.668786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-11-06 11:11:18.668835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-11-06 11:11:18.668849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-11-06 11:11:18.668861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-11-06 11:11:18.668867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.457 [2024-11-06 11:11:18.668882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-11-06 11:11:18.678848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-11-06 11:11:18.678903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-11-06 11:11:18.678917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-11-06 11:11:18.678924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-11-06 11:11:18.678931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.457 [2024-11-06 11:11:18.678945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-11-06 11:11:18.688875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-11-06 11:11:18.688924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-11-06 11:11:18.688937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-11-06 11:11:18.688945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-11-06 11:11:18.688951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.457 [2024-11-06 11:11:18.688965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-11-06 11:11:18.698865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-11-06 11:11:18.698912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-11-06 11:11:18.698927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-11-06 11:11:18.698935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-11-06 11:11:18.698941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.457 [2024-11-06 11:11:18.698957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-11-06 11:11:18.708872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-11-06 11:11:18.708917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-11-06 11:11:18.708931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-11-06 11:11:18.708938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-11-06 11:11:18.708945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.457 [2024-11-06 11:11:18.708963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-11-06 11:11:18.719000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-11-06 11:11:18.719058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-11-06 11:11:18.719072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-11-06 11:11:18.719079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-11-06 11:11:18.719086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.457 [2024-11-06 11:11:18.719099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-11-06 11:11:18.728969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-11-06 11:11:18.729018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-11-06 11:11:18.729032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-11-06 11:11:18.729039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.458 [2024-11-06 11:11:18.729046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.458 [2024-11-06 11:11:18.729059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.458 qpair failed and we were unable to recover it.
00:29:27.458 [2024-11-06 11:11:18.738861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.458 [2024-11-06 11:11:18.738911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.458 [2024-11-06 11:11:18.738925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.458 [2024-11-06 11:11:18.738932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.458 [2024-11-06 11:11:18.738939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.458 [2024-11-06 11:11:18.738952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.458 qpair failed and we were unable to recover it.
00:29:27.458 [2024-11-06 11:11:18.748978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.458 [2024-11-06 11:11:18.749026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.458 [2024-11-06 11:11:18.749040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.458 [2024-11-06 11:11:18.749047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.458 [2024-11-06 11:11:18.749053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.458 [2024-11-06 11:11:18.749067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.458 qpair failed and we were unable to recover it.
00:29:27.458 [2024-11-06 11:11:18.759078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.458 [2024-11-06 11:11:18.759144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.458 [2024-11-06 11:11:18.759157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.458 [2024-11-06 11:11:18.759165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.458 [2024-11-06 11:11:18.759172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.458 [2024-11-06 11:11:18.759185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.458 qpair failed and we were unable to recover it.
00:29:27.458 [2024-11-06 11:11:18.769061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.458 [2024-11-06 11:11:18.769132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.458 [2024-11-06 11:11:18.769146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.458 [2024-11-06 11:11:18.769153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.458 [2024-11-06 11:11:18.769159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.458 [2024-11-06 11:11:18.769173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.458 qpair failed and we were unable to recover it.
00:29:27.458 [2024-11-06 11:11:18.778966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.458 [2024-11-06 11:11:18.779014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.458 [2024-11-06 11:11:18.779029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.458 [2024-11-06 11:11:18.779036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.458 [2024-11-06 11:11:18.779043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.458 [2024-11-06 11:11:18.779058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.458 qpair failed and we were unable to recover it. 
00:29:27.458 [2024-11-06 11:11:18.789109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.458 [2024-11-06 11:11:18.789160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.458 [2024-11-06 11:11:18.789174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.458 [2024-11-06 11:11:18.789182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.458 [2024-11-06 11:11:18.789188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.458 [2024-11-06 11:11:18.789202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.458 qpair failed and we were unable to recover it. 
00:29:27.458 [2024-11-06 11:11:18.799185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.458 [2024-11-06 11:11:18.799239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.458 [2024-11-06 11:11:18.799253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.458 [2024-11-06 11:11:18.799264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.458 [2024-11-06 11:11:18.799270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.458 [2024-11-06 11:11:18.799285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.458 qpair failed and we were unable to recover it. 
00:29:27.458 [2024-11-06 11:11:18.809186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.458 [2024-11-06 11:11:18.809268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.458 [2024-11-06 11:11:18.809282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.458 [2024-11-06 11:11:18.809289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.458 [2024-11-06 11:11:18.809296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.458 [2024-11-06 11:11:18.809310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.458 qpair failed and we were unable to recover it. 
00:29:27.458 [2024-11-06 11:11:18.819197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.458 [2024-11-06 11:11:18.819246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.458 [2024-11-06 11:11:18.819259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.458 [2024-11-06 11:11:18.819266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.458 [2024-11-06 11:11:18.819273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.458 [2024-11-06 11:11:18.819286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.458 qpair failed and we were unable to recover it. 
00:29:27.458 [2024-11-06 11:11:18.829209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.458 [2024-11-06 11:11:18.829258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.458 [2024-11-06 11:11:18.829271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.458 [2024-11-06 11:11:18.829279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.458 [2024-11-06 11:11:18.829285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.458 [2024-11-06 11:11:18.829299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.458 qpair failed and we were unable to recover it. 
00:29:27.458 [2024-11-06 11:11:18.839279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.458 [2024-11-06 11:11:18.839334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.458 [2024-11-06 11:11:18.839347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.458 [2024-11-06 11:11:18.839355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.458 [2024-11-06 11:11:18.839361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.458 [2024-11-06 11:11:18.839380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.458 qpair failed and we were unable to recover it. 
00:29:27.458 [2024-11-06 11:11:18.849281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.458 [2024-11-06 11:11:18.849374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.458 [2024-11-06 11:11:18.849389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.458 [2024-11-06 11:11:18.849396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.458 [2024-11-06 11:11:18.849403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.458 [2024-11-06 11:11:18.849417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.458 qpair failed and we were unable to recover it. 
00:29:27.458 [2024-11-06 11:11:18.859280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.458 [2024-11-06 11:11:18.859327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.458 [2024-11-06 11:11:18.859341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.458 [2024-11-06 11:11:18.859348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.458 [2024-11-06 11:11:18.859355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.459 [2024-11-06 11:11:18.859369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.459 qpair failed and we were unable to recover it. 
00:29:27.459 [2024-11-06 11:11:18.869301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.459 [2024-11-06 11:11:18.869353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.459 [2024-11-06 11:11:18.869366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.459 [2024-11-06 11:11:18.869373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.459 [2024-11-06 11:11:18.869380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.459 [2024-11-06 11:11:18.869394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.459 qpair failed and we were unable to recover it. 
00:29:27.720 [2024-11-06 11:11:18.879393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.720 [2024-11-06 11:11:18.879446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.720 [2024-11-06 11:11:18.879460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.720 [2024-11-06 11:11:18.879468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.720 [2024-11-06 11:11:18.879475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.720 [2024-11-06 11:11:18.879488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.720 qpair failed and we were unable to recover it. 
00:29:27.720 [2024-11-06 11:11:18.889348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.720 [2024-11-06 11:11:18.889406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:18.889420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:18.889427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:18.889434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:18.889448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:18.899405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-11-06 11:11:18.899461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:18.899487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:18.899496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:18.899504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:18.899524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:18.909441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-11-06 11:11:18.909537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:18.909562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:18.909572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:18.909579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:18.909598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:18.919512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-11-06 11:11:18.919578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:18.919603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:18.919612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:18.919618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:18.919638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:18.929514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-11-06 11:11:18.929567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:18.929582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:18.929595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:18.929601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:18.929617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:18.939520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-11-06 11:11:18.939567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:18.939580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:18.939588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:18.939595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:18.939609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:18.949553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-11-06 11:11:18.949644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:18.949658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:18.949665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:18.949672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:18.949686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:18.959623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-11-06 11:11:18.959680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:18.959693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:18.959701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:18.959708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:18.959722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:18.969610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-11-06 11:11:18.969667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:18.969680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:18.969688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:18.969694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:18.969712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:18.979607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-11-06 11:11:18.979659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:18.979673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:18.979680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:18.979686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:18.979700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:18.989591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-11-06 11:11:18.989663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:18.989677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:18.989684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:18.989691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:18.989706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:18.999736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-11-06 11:11:18.999835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:18.999849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:18.999857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:18.999863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:18.999878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:19.009729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-11-06 11:11:19.009785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:19.009800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:19.009807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:19.009814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:19.009828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:19.019755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-11-06 11:11:19.019815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:19.019829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:19.019837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:19.019843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:19.019857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:19.029743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-11-06 11:11:19.029799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:19.029813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:19.029821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:19.029827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:19.029842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:19.039850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-11-06 11:11:19.039908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:19.039922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:19.039929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:19.039935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:19.039949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:19.049841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-11-06 11:11:19.049891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:19.049905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:19.049912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:19.049919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:19.049933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:19.059873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-11-06 11:11:19.059921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-11-06 11:11:19.059935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-11-06 11:11:19.059946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-11-06 11:11:19.059953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.721 [2024-11-06 11:11:19.059967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-11-06 11:11:19.069900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.722 [2024-11-06 11:11:19.069952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.722 [2024-11-06 11:11:19.069966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.722 [2024-11-06 11:11:19.069974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.722 [2024-11-06 11:11:19.069980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.722 [2024-11-06 11:11:19.069994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.722 qpair failed and we were unable to recover it. 
00:29:27.722 [2024-11-06 11:11:19.079976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.722 [2024-11-06 11:11:19.080033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.722 [2024-11-06 11:11:19.080046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.722 [2024-11-06 11:11:19.080054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.722 [2024-11-06 11:11:19.080060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.722 [2024-11-06 11:11:19.080074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.722 qpair failed and we were unable to recover it. 
00:29:27.722 [2024-11-06 11:11:19.089992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.722 [2024-11-06 11:11:19.090059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.722 [2024-11-06 11:11:19.090073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.722 [2024-11-06 11:11:19.090080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.722 [2024-11-06 11:11:19.090087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.722 [2024-11-06 11:11:19.090100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.722 qpair failed and we were unable to recover it. 
00:29:27.722 [2024-11-06 11:11:19.099888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.722 [2024-11-06 11:11:19.099941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.722 [2024-11-06 11:11:19.099955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.722 [2024-11-06 11:11:19.099963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.722 [2024-11-06 11:11:19.099969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.722 [2024-11-06 11:11:19.099987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.722 qpair failed and we were unable to recover it. 
00:29:27.722 [2024-11-06 11:11:19.109892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.722 [2024-11-06 11:11:19.109943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.722 [2024-11-06 11:11:19.109957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.722 [2024-11-06 11:11:19.109964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.722 [2024-11-06 11:11:19.109971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.722 [2024-11-06 11:11:19.109984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.722 qpair failed and we were unable to recover it. 
00:29:27.722 [2024-11-06 11:11:19.120090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.722 [2024-11-06 11:11:19.120146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.722 [2024-11-06 11:11:19.120160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.722 [2024-11-06 11:11:19.120167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.722 [2024-11-06 11:11:19.120174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.722 [2024-11-06 11:11:19.120188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.722 qpair failed and we were unable to recover it. 
00:29:27.722 [2024-11-06 11:11:19.130079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.722 [2024-11-06 11:11:19.130130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.722 [2024-11-06 11:11:19.130144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.722 [2024-11-06 11:11:19.130151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.722 [2024-11-06 11:11:19.130158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.722 [2024-11-06 11:11:19.130172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.722 qpair failed and we were unable to recover it. 
00:29:27.984 [2024-11-06 11:11:19.140082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.984 [2024-11-06 11:11:19.140133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.984 [2024-11-06 11:11:19.140147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.984 [2024-11-06 11:11:19.140154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.984 [2024-11-06 11:11:19.140161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.984 [2024-11-06 11:11:19.140175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.984 qpair failed and we were unable to recover it. 
00:29:27.984 [2024-11-06 11:11:19.150125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.984 [2024-11-06 11:11:19.150174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.984 [2024-11-06 11:11:19.150188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.984 [2024-11-06 11:11:19.150195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.984 [2024-11-06 11:11:19.150202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.984 [2024-11-06 11:11:19.150216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.984 qpair failed and we were unable to recover it. 
00:29:27.984 [2024-11-06 11:11:19.160195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.984 [2024-11-06 11:11:19.160249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.984 [2024-11-06 11:11:19.160263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.984 [2024-11-06 11:11:19.160270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.984 [2024-11-06 11:11:19.160277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.984 [2024-11-06 11:11:19.160291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.984 qpair failed and we were unable to recover it. 
00:29:27.984 [2024-11-06 11:11:19.170195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.984 [2024-11-06 11:11:19.170244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.984 [2024-11-06 11:11:19.170258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.984 [2024-11-06 11:11:19.170265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.984 [2024-11-06 11:11:19.170271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.984 [2024-11-06 11:11:19.170285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.984 qpair failed and we were unable to recover it. 
00:29:27.984 [2024-11-06 11:11:19.180207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.984 [2024-11-06 11:11:19.180269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.984 [2024-11-06 11:11:19.180282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.984 [2024-11-06 11:11:19.180289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.984 [2024-11-06 11:11:19.180296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.984 [2024-11-06 11:11:19.180309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.984 qpair failed and we were unable to recover it. 
00:29:27.984 [2024-11-06 11:11:19.190230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.984 [2024-11-06 11:11:19.190284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.984 [2024-11-06 11:11:19.190298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.984 [2024-11-06 11:11:19.190308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.984 [2024-11-06 11:11:19.190315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.984 [2024-11-06 11:11:19.190329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.984 qpair failed and we were unable to recover it. 
00:29:27.984 [2024-11-06 11:11:19.200276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.984 [2024-11-06 11:11:19.200338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.984 [2024-11-06 11:11:19.200352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.984 [2024-11-06 11:11:19.200359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.984 [2024-11-06 11:11:19.200366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.984 [2024-11-06 11:11:19.200380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.984 qpair failed and we were unable to recover it. 
00:29:27.984 [2024-11-06 11:11:19.210298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.984 [2024-11-06 11:11:19.210352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.984 [2024-11-06 11:11:19.210365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.984 [2024-11-06 11:11:19.210372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.984 [2024-11-06 11:11:19.210379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.984 [2024-11-06 11:11:19.210393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.984 qpair failed and we were unable to recover it. 
00:29:27.984 [2024-11-06 11:11:19.220311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.984 [2024-11-06 11:11:19.220362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.984 [2024-11-06 11:11:19.220375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.984 [2024-11-06 11:11:19.220383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.984 [2024-11-06 11:11:19.220390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.984 [2024-11-06 11:11:19.220404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.984 qpair failed and we were unable to recover it. 
00:29:27.984 [2024-11-06 11:11:19.230341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.984 [2024-11-06 11:11:19.230388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.984 [2024-11-06 11:11:19.230402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.984 [2024-11-06 11:11:19.230409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.984 [2024-11-06 11:11:19.230416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.984 [2024-11-06 11:11:19.230433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.984 qpair failed and we were unable to recover it. 
00:29:27.984 [2024-11-06 11:11:19.240400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.984 [2024-11-06 11:11:19.240457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.984 [2024-11-06 11:11:19.240470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.984 [2024-11-06 11:11:19.240478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.984 [2024-11-06 11:11:19.240484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.984 [2024-11-06 11:11:19.240498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.984 qpair failed and we were unable to recover it. 
00:29:27.984 [2024-11-06 11:11:19.250411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.984 [2024-11-06 11:11:19.250468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.984 [2024-11-06 11:11:19.250493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-11-06 11:11:19.250502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-11-06 11:11:19.250510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.985 [2024-11-06 11:11:19.250529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-11-06 11:11:19.260422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-11-06 11:11:19.260478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-11-06 11:11:19.260503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-11-06 11:11:19.260513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-11-06 11:11:19.260520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.985 [2024-11-06 11:11:19.260539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-11-06 11:11:19.270442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-11-06 11:11:19.270527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-11-06 11:11:19.270553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-11-06 11:11:19.270562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-11-06 11:11:19.270569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.985 [2024-11-06 11:11:19.270589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-11-06 11:11:19.280506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-11-06 11:11:19.280572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-11-06 11:11:19.280597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-11-06 11:11:19.280607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-11-06 11:11:19.280614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.985 [2024-11-06 11:11:19.280633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-11-06 11:11:19.290501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-11-06 11:11:19.290556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-11-06 11:11:19.290572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-11-06 11:11:19.290580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-11-06 11:11:19.290587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.985 [2024-11-06 11:11:19.290602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-11-06 11:11:19.300504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-11-06 11:11:19.300557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-11-06 11:11:19.300572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-11-06 11:11:19.300580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-11-06 11:11:19.300587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.985 [2024-11-06 11:11:19.300601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-11-06 11:11:19.310544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-11-06 11:11:19.310607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-11-06 11:11:19.310621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-11-06 11:11:19.310628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-11-06 11:11:19.310635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.985 [2024-11-06 11:11:19.310649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-11-06 11:11:19.320625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-11-06 11:11:19.320679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-11-06 11:11:19.320693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-11-06 11:11:19.320704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-11-06 11:11:19.320711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.985 [2024-11-06 11:11:19.320725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-11-06 11:11:19.330616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-11-06 11:11:19.330669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-11-06 11:11:19.330683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-11-06 11:11:19.330690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-11-06 11:11:19.330697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.985 [2024-11-06 11:11:19.330711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-11-06 11:11:19.340622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-11-06 11:11:19.340672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-11-06 11:11:19.340686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-11-06 11:11:19.340693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-11-06 11:11:19.340700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.985 [2024-11-06 11:11:19.340714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-11-06 11:11:19.350643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-11-06 11:11:19.350695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-11-06 11:11:19.350709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-11-06 11:11:19.350716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-11-06 11:11:19.350723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.985 [2024-11-06 11:11:19.350737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-11-06 11:11:19.360725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-11-06 11:11:19.360786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-11-06 11:11:19.360801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-11-06 11:11:19.360808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-11-06 11:11:19.360814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.985 [2024-11-06 11:11:19.360832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-11-06 11:11:19.370729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-11-06 11:11:19.370785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-11-06 11:11:19.370799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-11-06 11:11:19.370806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-11-06 11:11:19.370813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:27.985 [2024-11-06 11:11:19.370827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-11-06 11:11:19.380724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.985 [2024-11-06 11:11:19.380779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.985 [2024-11-06 11:11:19.380793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.985 [2024-11-06 11:11:19.380800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.985 [2024-11-06 11:11:19.380807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.986 [2024-11-06 11:11:19.380820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.986 qpair failed and we were unable to recover it.
00:29:27.986 [2024-11-06 11:11:19.390732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.986 [2024-11-06 11:11:19.390787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.986 [2024-11-06 11:11:19.390802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.986 [2024-11-06 11:11:19.390809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.986 [2024-11-06 11:11:19.390816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.986 [2024-11-06 11:11:19.390831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.986 qpair failed and we were unable to recover it.
00:29:27.986 [2024-11-06 11:11:19.400829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.986 [2024-11-06 11:11:19.400885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.986 [2024-11-06 11:11:19.400899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.986 [2024-11-06 11:11:19.400907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.986 [2024-11-06 11:11:19.400914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:27.986 [2024-11-06 11:11:19.400927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.986 qpair failed and we were unable to recover it.
00:29:28.248 [2024-11-06 11:11:19.410919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.248 [2024-11-06 11:11:19.410981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.248 [2024-11-06 11:11:19.410995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.248 [2024-11-06 11:11:19.411003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.248 [2024-11-06 11:11:19.411009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.248 [2024-11-06 11:11:19.411023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.248 qpair failed and we were unable to recover it.
00:29:28.248 [2024-11-06 11:11:19.420708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.248 [2024-11-06 11:11:19.420760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.248 [2024-11-06 11:11:19.420774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.248 [2024-11-06 11:11:19.420782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.248 [2024-11-06 11:11:19.420788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.248 [2024-11-06 11:11:19.420803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.248 qpair failed and we were unable to recover it.
00:29:28.248 [2024-11-06 11:11:19.430841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.248 [2024-11-06 11:11:19.430893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.248 [2024-11-06 11:11:19.430906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.248 [2024-11-06 11:11:19.430914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.248 [2024-11-06 11:11:19.430921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.248 [2024-11-06 11:11:19.430935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.248 qpair failed and we were unable to recover it.
00:29:28.248 [2024-11-06 11:11:19.440935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.248 [2024-11-06 11:11:19.440993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.248 [2024-11-06 11:11:19.441006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.248 [2024-11-06 11:11:19.441014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.248 [2024-11-06 11:11:19.441021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.248 [2024-11-06 11:11:19.441034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.248 qpair failed and we were unable to recover it.
00:29:28.248 [2024-11-06 11:11:19.450932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.248 [2024-11-06 11:11:19.450987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.248 [2024-11-06 11:11:19.451001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.248 [2024-11-06 11:11:19.451016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.248 [2024-11-06 11:11:19.451023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.248 [2024-11-06 11:11:19.451036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.248 qpair failed and we were unable to recover it.
00:29:28.248 [2024-11-06 11:11:19.460930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.248 [2024-11-06 11:11:19.460983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.248 [2024-11-06 11:11:19.460997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.248 [2024-11-06 11:11:19.461004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.248 [2024-11-06 11:11:19.461011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.248 [2024-11-06 11:11:19.461025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.248 qpair failed and we were unable to recover it.
00:29:28.248 [2024-11-06 11:11:19.470940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.248 [2024-11-06 11:11:19.470990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.248 [2024-11-06 11:11:19.471003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.248 [2024-11-06 11:11:19.471011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.248 [2024-11-06 11:11:19.471017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.248 [2024-11-06 11:11:19.471031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.248 qpair failed and we were unable to recover it.
00:29:28.248 [2024-11-06 11:11:19.481065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.248 [2024-11-06 11:11:19.481121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.248 [2024-11-06 11:11:19.481135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.248 [2024-11-06 11:11:19.481142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.248 [2024-11-06 11:11:19.481149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.248 [2024-11-06 11:11:19.481162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.248 qpair failed and we were unable to recover it.
00:29:28.248 [2024-11-06 11:11:19.491029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.248 [2024-11-06 11:11:19.491081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.248 [2024-11-06 11:11:19.491095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.248 [2024-11-06 11:11:19.491102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.248 [2024-11-06 11:11:19.491108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.248 [2024-11-06 11:11:19.491125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-11-06 11:11:19.501035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-11-06 11:11:19.501088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-11-06 11:11:19.501102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-11-06 11:11:19.501109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-11-06 11:11:19.501116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.249 [2024-11-06 11:11:19.501130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-11-06 11:11:19.511063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-11-06 11:11:19.511111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-11-06 11:11:19.511124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-11-06 11:11:19.511132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-11-06 11:11:19.511138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.249 [2024-11-06 11:11:19.511152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-11-06 11:11:19.521148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-11-06 11:11:19.521206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-11-06 11:11:19.521219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-11-06 11:11:19.521227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-11-06 11:11:19.521233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.249 [2024-11-06 11:11:19.521247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-11-06 11:11:19.531144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-11-06 11:11:19.531197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-11-06 11:11:19.531210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-11-06 11:11:19.531218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-11-06 11:11:19.531224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.249 [2024-11-06 11:11:19.531238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-11-06 11:11:19.541159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-11-06 11:11:19.541210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-11-06 11:11:19.541224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-11-06 11:11:19.541232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-11-06 11:11:19.541238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.249 [2024-11-06 11:11:19.541252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-11-06 11:11:19.551154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-11-06 11:11:19.551202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-11-06 11:11:19.551215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-11-06 11:11:19.551223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-11-06 11:11:19.551230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.249 [2024-11-06 11:11:19.551244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-11-06 11:11:19.561265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-11-06 11:11:19.561337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-11-06 11:11:19.561351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-11-06 11:11:19.561358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-11-06 11:11:19.561364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.249 [2024-11-06 11:11:19.561379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-11-06 11:11:19.571309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-11-06 11:11:19.571399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-11-06 11:11:19.571412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-11-06 11:11:19.571420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-11-06 11:11:19.571426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.249 [2024-11-06 11:11:19.571440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-11-06 11:11:19.581265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-11-06 11:11:19.581314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-11-06 11:11:19.581328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-11-06 11:11:19.581339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-11-06 11:11:19.581345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.249 [2024-11-06 11:11:19.581359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-11-06 11:11:19.591187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-11-06 11:11:19.591244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-11-06 11:11:19.591259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-11-06 11:11:19.591267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-11-06 11:11:19.591273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.249 [2024-11-06 11:11:19.591288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-11-06 11:11:19.601380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-11-06 11:11:19.601441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-11-06 11:11:19.601455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-11-06 11:11:19.601462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-11-06 11:11:19.601469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.249 [2024-11-06 11:11:19.601483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-11-06 11:11:19.611376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-11-06 11:11:19.611435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-11-06 11:11:19.611449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-11-06 11:11:19.611456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-11-06 11:11:19.611463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.249 [2024-11-06 11:11:19.611477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-11-06 11:11:19.621391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-11-06 11:11:19.621441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-11-06 11:11:19.621455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-11-06 11:11:19.621463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-11-06 11:11:19.621470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.249 [2024-11-06 11:11:19.621487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.250 [2024-11-06 11:11:19.631412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-11-06 11:11:19.631458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-11-06 11:11:19.631472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-11-06 11:11:19.631479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-11-06 11:11:19.631486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.250 [2024-11-06 11:11:19.631500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.250 [2024-11-06 11:11:19.641483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-11-06 11:11:19.641577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-11-06 11:11:19.641590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-11-06 11:11:19.641597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-11-06 11:11:19.641604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.250 [2024-11-06 11:11:19.641617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.250 [2024-11-06 11:11:19.651362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-11-06 11:11:19.651416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-11-06 11:11:19.651432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-11-06 11:11:19.651439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-11-06 11:11:19.651446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.250 [2024-11-06 11:11:19.651460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.250 [2024-11-06 11:11:19.661503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-11-06 11:11:19.661551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-11-06 11:11:19.661566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-11-06 11:11:19.661573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-11-06 11:11:19.661579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.250 [2024-11-06 11:11:19.661593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.513 [2024-11-06 11:11:19.671493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.513 [2024-11-06 11:11:19.671539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.513 [2024-11-06 11:11:19.671553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.513 [2024-11-06 11:11:19.671560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.513 [2024-11-06 11:11:19.671566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.513 [2024-11-06 11:11:19.671580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.513 qpair failed and we were unable to recover it.
00:29:28.513 [2024-11-06 11:11:19.681594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.513 [2024-11-06 11:11:19.681653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.513 [2024-11-06 11:11:19.681667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.513 [2024-11-06 11:11:19.681674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.513 [2024-11-06 11:11:19.681680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.513 [2024-11-06 11:11:19.681694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.513 qpair failed and we were unable to recover it.
00:29:28.513 [2024-11-06 11:11:19.691602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.513 [2024-11-06 11:11:19.691660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.513 [2024-11-06 11:11:19.691674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.513 [2024-11-06 11:11:19.691682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.513 [2024-11-06 11:11:19.691689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.513 [2024-11-06 11:11:19.691703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.513 qpair failed and we were unable to recover it. 
00:29:28.513 [2024-11-06 11:11:19.701602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.513 [2024-11-06 11:11:19.701656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.513 [2024-11-06 11:11:19.701672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.513 [2024-11-06 11:11:19.701679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.513 [2024-11-06 11:11:19.701686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.513 [2024-11-06 11:11:19.701700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.513 qpair failed and we were unable to recover it. 
00:29:28.513 [2024-11-06 11:11:19.711658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.513 [2024-11-06 11:11:19.711706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.513 [2024-11-06 11:11:19.711721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.513 [2024-11-06 11:11:19.711732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.513 [2024-11-06 11:11:19.711739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.513 [2024-11-06 11:11:19.711758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.513 qpair failed and we were unable to recover it. 
00:29:28.513 [2024-11-06 11:11:19.721724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.513 [2024-11-06 11:11:19.721840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.513 [2024-11-06 11:11:19.721854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.513 [2024-11-06 11:11:19.721861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.513 [2024-11-06 11:11:19.721869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.513 [2024-11-06 11:11:19.721883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.513 qpair failed and we were unable to recover it. 
00:29:28.513 [2024-11-06 11:11:19.731695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.513 [2024-11-06 11:11:19.731781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.513 [2024-11-06 11:11:19.731795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.513 [2024-11-06 11:11:19.731802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.513 [2024-11-06 11:11:19.731810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.513 [2024-11-06 11:11:19.731824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.513 qpair failed and we were unable to recover it. 
00:29:28.513 [2024-11-06 11:11:19.741709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.513 [2024-11-06 11:11:19.741807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.513 [2024-11-06 11:11:19.741821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.513 [2024-11-06 11:11:19.741828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.513 [2024-11-06 11:11:19.741835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.513 [2024-11-06 11:11:19.741849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.513 qpair failed and we were unable to recover it. 
00:29:28.513 [2024-11-06 11:11:19.751732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.513 [2024-11-06 11:11:19.751787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.513 [2024-11-06 11:11:19.751801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.513 [2024-11-06 11:11:19.751809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.513 [2024-11-06 11:11:19.751815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.514 [2024-11-06 11:11:19.751833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-11-06 11:11:19.761810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-11-06 11:11:19.761868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-11-06 11:11:19.761882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-11-06 11:11:19.761889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-11-06 11:11:19.761896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.514 [2024-11-06 11:11:19.761911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-11-06 11:11:19.771779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-11-06 11:11:19.771833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-11-06 11:11:19.771847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-11-06 11:11:19.771855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-11-06 11:11:19.771862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.514 [2024-11-06 11:11:19.771876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-11-06 11:11:19.781798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-11-06 11:11:19.781846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-11-06 11:11:19.781860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-11-06 11:11:19.781867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-11-06 11:11:19.781874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.514 [2024-11-06 11:11:19.781887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-11-06 11:11:19.791841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-11-06 11:11:19.791896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-11-06 11:11:19.791910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-11-06 11:11:19.791917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-11-06 11:11:19.791924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.514 [2024-11-06 11:11:19.791938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-11-06 11:11:19.801913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-11-06 11:11:19.801981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-11-06 11:11:19.801995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-11-06 11:11:19.802002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-11-06 11:11:19.802009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.514 [2024-11-06 11:11:19.802023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-11-06 11:11:19.811921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-11-06 11:11:19.812028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-11-06 11:11:19.812042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-11-06 11:11:19.812050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-11-06 11:11:19.812056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.514 [2024-11-06 11:11:19.812070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-11-06 11:11:19.821981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-11-06 11:11:19.822028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-11-06 11:11:19.822042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-11-06 11:11:19.822049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-11-06 11:11:19.822056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.514 [2024-11-06 11:11:19.822070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-11-06 11:11:19.831987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-11-06 11:11:19.832035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-11-06 11:11:19.832050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-11-06 11:11:19.832057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-11-06 11:11:19.832064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.514 [2024-11-06 11:11:19.832077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-11-06 11:11:19.842035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-11-06 11:11:19.842090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-11-06 11:11:19.842107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-11-06 11:11:19.842115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-11-06 11:11:19.842121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.514 [2024-11-06 11:11:19.842135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-11-06 11:11:19.851915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-11-06 11:11:19.851972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-11-06 11:11:19.851986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-11-06 11:11:19.851994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-11-06 11:11:19.852000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.514 [2024-11-06 11:11:19.852014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-11-06 11:11:19.862047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-11-06 11:11:19.862099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-11-06 11:11:19.862113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-11-06 11:11:19.862120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-11-06 11:11:19.862127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.514 [2024-11-06 11:11:19.862141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-11-06 11:11:19.872012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-11-06 11:11:19.872075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-11-06 11:11:19.872089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-11-06 11:11:19.872096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-11-06 11:11:19.872103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.514 [2024-11-06 11:11:19.872117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-11-06 11:11:19.882126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-11-06 11:11:19.882183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-11-06 11:11:19.882196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-11-06 11:11:19.882203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-11-06 11:11:19.882210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.514 [2024-11-06 11:11:19.882227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.515 [2024-11-06 11:11:19.892114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.515 [2024-11-06 11:11:19.892161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.515 [2024-11-06 11:11:19.892176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.515 [2024-11-06 11:11:19.892183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.515 [2024-11-06 11:11:19.892190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.515 [2024-11-06 11:11:19.892205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.515 qpair failed and we were unable to recover it. 
00:29:28.515 [2024-11-06 11:11:19.902153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.515 [2024-11-06 11:11:19.902201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.515 [2024-11-06 11:11:19.902214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.515 [2024-11-06 11:11:19.902222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.515 [2024-11-06 11:11:19.902229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.515 [2024-11-06 11:11:19.902243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.515 qpair failed and we were unable to recover it. 
00:29:28.515 [2024-11-06 11:11:19.912182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.515 [2024-11-06 11:11:19.912235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.515 [2024-11-06 11:11:19.912249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.515 [2024-11-06 11:11:19.912256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.515 [2024-11-06 11:11:19.912263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.515 [2024-11-06 11:11:19.912277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.515 qpair failed and we were unable to recover it. 
00:29:28.515 [2024-11-06 11:11:19.922252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.515 [2024-11-06 11:11:19.922308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.515 [2024-11-06 11:11:19.922322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.515 [2024-11-06 11:11:19.922330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.515 [2024-11-06 11:11:19.922336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.515 [2024-11-06 11:11:19.922350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.515 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-11-06 11:11:19.932258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-11-06 11:11:19.932310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.776 [2024-11-06 11:11:19.932324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.776 [2024-11-06 11:11:19.932332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.776 [2024-11-06 11:11:19.932338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.776 [2024-11-06 11:11:19.932352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.776 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-11-06 11:11:19.942268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-11-06 11:11:19.942348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.776 [2024-11-06 11:11:19.942361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.776 [2024-11-06 11:11:19.942369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.776 [2024-11-06 11:11:19.942377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.776 [2024-11-06 11:11:19.942391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.776 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-11-06 11:11:19.952269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-11-06 11:11:19.952320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.776 [2024-11-06 11:11:19.952334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.776 [2024-11-06 11:11:19.952341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.776 [2024-11-06 11:11:19.952348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.776 [2024-11-06 11:11:19.952361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.776 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-11-06 11:11:19.962363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-11-06 11:11:19.962423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.776 [2024-11-06 11:11:19.962439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.776 [2024-11-06 11:11:19.962447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.776 [2024-11-06 11:11:19.962455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.776 [2024-11-06 11:11:19.962472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.776 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-11-06 11:11:19.972363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-11-06 11:11:19.972458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.776 [2024-11-06 11:11:19.972476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.776 [2024-11-06 11:11:19.972484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.776 [2024-11-06 11:11:19.972490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:28.776 [2024-11-06 11:11:19.972505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.776 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-11-06 11:11:19.982240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.776 [2024-11-06 11:11:19.982290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.776 [2024-11-06 11:11:19.982304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.776 [2024-11-06 11:11:19.982311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.776 [2024-11-06 11:11:19.982318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.776 [2024-11-06 11:11:19.982332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.776 qpair failed and we were unable to recover it.
00:29:28.776 [2024-11-06 11:11:19.992398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.776 [2024-11-06 11:11:19.992445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.776 [2024-11-06 11:11:19.992460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.776 [2024-11-06 11:11:19.992467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.776 [2024-11-06 11:11:19.992473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.776 [2024-11-06 11:11:19.992487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.776 qpair failed and we were unable to recover it.
00:29:28.776 [2024-11-06 11:11:20.002808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.776 [2024-11-06 11:11:20.002914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.776 [2024-11-06 11:11:20.002929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.776 [2024-11-06 11:11:20.002937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.776 [2024-11-06 11:11:20.002944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.776 [2024-11-06 11:11:20.002958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.776 qpair failed and we were unable to recover it.
00:29:28.776 [2024-11-06 11:11:20.012716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.776 [2024-11-06 11:11:20.012765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.776 [2024-11-06 11:11:20.012779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.776 [2024-11-06 11:11:20.012787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.776 [2024-11-06 11:11:20.012793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.776 [2024-11-06 11:11:20.012811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.776 qpair failed and we were unable to recover it.
00:29:28.776 [2024-11-06 11:11:20.022723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.776 [2024-11-06 11:11:20.022773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.776 [2024-11-06 11:11:20.022787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.776 [2024-11-06 11:11:20.022795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.776 [2024-11-06 11:11:20.022801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.776 [2024-11-06 11:11:20.022816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.776 qpair failed and we were unable to recover it.
00:29:28.776 [2024-11-06 11:11:20.032795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.776 [2024-11-06 11:11:20.032841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.776 [2024-11-06 11:11:20.032854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.776 [2024-11-06 11:11:20.032862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.776 [2024-11-06 11:11:20.032869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.776 [2024-11-06 11:11:20.032883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.776 qpair failed and we were unable to recover it.
00:29:28.776 [2024-11-06 11:11:20.042837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.776 [2024-11-06 11:11:20.042895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.776 [2024-11-06 11:11:20.042909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.776 [2024-11-06 11:11:20.042916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.776 [2024-11-06 11:11:20.042923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.776 [2024-11-06 11:11:20.042937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.776 qpair failed and we were unable to recover it.
00:29:28.777 [2024-11-06 11:11:20.052881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.777 [2024-11-06 11:11:20.052940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.777 [2024-11-06 11:11:20.052954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.777 [2024-11-06 11:11:20.052961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.777 [2024-11-06 11:11:20.052968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.777 [2024-11-06 11:11:20.052983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.777 qpair failed and we were unable to recover it.
00:29:28.777 [2024-11-06 11:11:20.062823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.777 [2024-11-06 11:11:20.062882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.777 [2024-11-06 11:11:20.062897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.777 [2024-11-06 11:11:20.062905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.777 [2024-11-06 11:11:20.062912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.777 [2024-11-06 11:11:20.062930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.777 qpair failed and we were unable to recover it.
00:29:28.777 [2024-11-06 11:11:20.072843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.777 [2024-11-06 11:11:20.072889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.777 [2024-11-06 11:11:20.072904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.777 [2024-11-06 11:11:20.072911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.777 [2024-11-06 11:11:20.072918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.777 [2024-11-06 11:11:20.072933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.777 qpair failed and we were unable to recover it.
00:29:28.777 [2024-11-06 11:11:20.082995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.777 [2024-11-06 11:11:20.083069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.777 [2024-11-06 11:11:20.083083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.777 [2024-11-06 11:11:20.083090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.777 [2024-11-06 11:11:20.083097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.777 [2024-11-06 11:11:20.083111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.777 qpair failed and we were unable to recover it.
00:29:28.777 [2024-11-06 11:11:20.093028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.777 [2024-11-06 11:11:20.093087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.777 [2024-11-06 11:11:20.093101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.777 [2024-11-06 11:11:20.093108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.777 [2024-11-06 11:11:20.093115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.777 [2024-11-06 11:11:20.093130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.777 qpair failed and we were unable to recover it.
00:29:28.777 [2024-11-06 11:11:20.102985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.777 [2024-11-06 11:11:20.103032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.777 [2024-11-06 11:11:20.103049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.777 [2024-11-06 11:11:20.103056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.777 [2024-11-06 11:11:20.103063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.777 [2024-11-06 11:11:20.103077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.777 qpair failed and we were unable to recover it.
00:29:28.777 [2024-11-06 11:11:20.112856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.777 [2024-11-06 11:11:20.112912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.777 [2024-11-06 11:11:20.112926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.777 [2024-11-06 11:11:20.112933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.777 [2024-11-06 11:11:20.112940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.777 [2024-11-06 11:11:20.112954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.777 qpair failed and we were unable to recover it.
00:29:28.777 [2024-11-06 11:11:20.122918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.777 [2024-11-06 11:11:20.122975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.777 [2024-11-06 11:11:20.122990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.777 [2024-11-06 11:11:20.122998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.777 [2024-11-06 11:11:20.123005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.777 [2024-11-06 11:11:20.123019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.777 qpair failed and we were unable to recover it.
00:29:28.777 [2024-11-06 11:11:20.133055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.777 [2024-11-06 11:11:20.133107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.777 [2024-11-06 11:11:20.133121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.777 [2024-11-06 11:11:20.133128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.777 [2024-11-06 11:11:20.133135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.777 [2024-11-06 11:11:20.133148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.777 qpair failed and we were unable to recover it.
00:29:28.777 [2024-11-06 11:11:20.143036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.777 [2024-11-06 11:11:20.143089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.777 [2024-11-06 11:11:20.143103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.777 [2024-11-06 11:11:20.143111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.777 [2024-11-06 11:11:20.143118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.777 [2024-11-06 11:11:20.143139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.777 qpair failed and we were unable to recover it.
00:29:28.777 [2024-11-06 11:11:20.153057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.777 [2024-11-06 11:11:20.153111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.777 [2024-11-06 11:11:20.153125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.777 [2024-11-06 11:11:20.153133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.777 [2024-11-06 11:11:20.153139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.777 [2024-11-06 11:11:20.153154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.777 qpair failed and we were unable to recover it.
00:29:28.777 [2024-11-06 11:11:20.163116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.777 [2024-11-06 11:11:20.163170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.777 [2024-11-06 11:11:20.163183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.777 [2024-11-06 11:11:20.163191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.777 [2024-11-06 11:11:20.163197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.778 [2024-11-06 11:11:20.163211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.778 qpair failed and we were unable to recover it.
00:29:28.778 [2024-11-06 11:11:20.173120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.778 [2024-11-06 11:11:20.173173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.778 [2024-11-06 11:11:20.173187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.778 [2024-11-06 11:11:20.173195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.778 [2024-11-06 11:11:20.173201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.778 [2024-11-06 11:11:20.173215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.778 qpair failed and we were unable to recover it.
00:29:28.778 [2024-11-06 11:11:20.183158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.778 [2024-11-06 11:11:20.183205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.778 [2024-11-06 11:11:20.183219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.778 [2024-11-06 11:11:20.183226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.778 [2024-11-06 11:11:20.183233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.778 [2024-11-06 11:11:20.183247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.778 qpair failed and we were unable to recover it.
00:29:28.778 [2024-11-06 11:11:20.193211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.778 [2024-11-06 11:11:20.193264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.778 [2024-11-06 11:11:20.193278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.778 [2024-11-06 11:11:20.193285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.778 [2024-11-06 11:11:20.193292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:28.778 [2024-11-06 11:11:20.193305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.778 qpair failed and we were unable to recover it.
00:29:29.040 [2024-11-06 11:11:20.203259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.040 [2024-11-06 11:11:20.203314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.040 [2024-11-06 11:11:20.203328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.040 [2024-11-06 11:11:20.203335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.040 [2024-11-06 11:11:20.203342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.040 [2024-11-06 11:11:20.203356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.040 qpair failed and we were unable to recover it.
00:29:29.040 [2024-11-06 11:11:20.213227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.040 [2024-11-06 11:11:20.213277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.040 [2024-11-06 11:11:20.213290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.040 [2024-11-06 11:11:20.213298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.040 [2024-11-06 11:11:20.213304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.040 [2024-11-06 11:11:20.213318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.040 qpair failed and we were unable to recover it.
00:29:29.040 [2024-11-06 11:11:20.223230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.040 [2024-11-06 11:11:20.223279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.040 [2024-11-06 11:11:20.223293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.040 [2024-11-06 11:11:20.223300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.040 [2024-11-06 11:11:20.223307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.040 [2024-11-06 11:11:20.223321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.040 qpair failed and we were unable to recover it.
00:29:29.040 [2024-11-06 11:11:20.233302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.040 [2024-11-06 11:11:20.233349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.040 [2024-11-06 11:11:20.233366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.040 [2024-11-06 11:11:20.233374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.040 [2024-11-06 11:11:20.233380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.040 [2024-11-06 11:11:20.233394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.040 qpair failed and we were unable to recover it.
00:29:29.040 [2024-11-06 11:11:20.243252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.040 [2024-11-06 11:11:20.243312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.040 [2024-11-06 11:11:20.243325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.040 [2024-11-06 11:11:20.243333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.040 [2024-11-06 11:11:20.243340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.040 [2024-11-06 11:11:20.243354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.040 qpair failed and we were unable to recover it.
00:29:29.040 [2024-11-06 11:11:20.253366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.040 [2024-11-06 11:11:20.253419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.040 [2024-11-06 11:11:20.253432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.040 [2024-11-06 11:11:20.253440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.040 [2024-11-06 11:11:20.253446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.040 [2024-11-06 11:11:20.253461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.040 qpair failed and we were unable to recover it.
00:29:29.040 [2024-11-06 11:11:20.263261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.040 [2024-11-06 11:11:20.263313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.040 [2024-11-06 11:11:20.263328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.040 [2024-11-06 11:11:20.263336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.040 [2024-11-06 11:11:20.263343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.041 [2024-11-06 11:11:20.263357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.041 qpair failed and we were unable to recover it.
00:29:29.041 [2024-11-06 11:11:20.273384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.041 [2024-11-06 11:11:20.273434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.041 [2024-11-06 11:11:20.273448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.041 [2024-11-06 11:11:20.273456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.041 [2024-11-06 11:11:20.273466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.041 [2024-11-06 11:11:20.273480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.041 qpair failed and we were unable to recover it.
00:29:29.041 [2024-11-06 11:11:20.283491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.041 [2024-11-06 11:11:20.283553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.041 [2024-11-06 11:11:20.283578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.041 [2024-11-06 11:11:20.283587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.041 [2024-11-06 11:11:20.283595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.041 [2024-11-06 11:11:20.283615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.041 qpair failed and we were unable to recover it.
00:29:29.041 [2024-11-06 11:11:20.293483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.041 [2024-11-06 11:11:20.293548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.041 [2024-11-06 11:11:20.293574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.041 [2024-11-06 11:11:20.293583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.041 [2024-11-06 11:11:20.293591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.041 [2024-11-06 11:11:20.293610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.041 qpair failed and we were unable to recover it.
00:29:29.041 [2024-11-06 11:11:20.303497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.041 [2024-11-06 11:11:20.303561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.041 [2024-11-06 11:11:20.303577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.041 [2024-11-06 11:11:20.303585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.041 [2024-11-06 11:11:20.303593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.041 [2024-11-06 11:11:20.303609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.041 qpair failed and we were unable to recover it.
00:29:29.041 [2024-11-06 11:11:20.313505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.041 [2024-11-06 11:11:20.313552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.041 [2024-11-06 11:11:20.313567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.041 [2024-11-06 11:11:20.313574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.041 [2024-11-06 11:11:20.313581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.041 [2024-11-06 11:11:20.313595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.041 qpair failed and we were unable to recover it.
00:29:29.041 [2024-11-06 11:11:20.323585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.041 [2024-11-06 11:11:20.323640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.041 [2024-11-06 11:11:20.323654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.041 [2024-11-06 11:11:20.323662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.041 [2024-11-06 11:11:20.323668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.041 [2024-11-06 11:11:20.323682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.041 qpair failed and we were unable to recover it.
00:29:29.041 [2024-11-06 11:11:20.333485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.041 [2024-11-06 11:11:20.333544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.041 [2024-11-06 11:11:20.333558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.041 [2024-11-06 11:11:20.333565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.041 [2024-11-06 11:11:20.333572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.041 [2024-11-06 11:11:20.333586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.041 qpair failed and we were unable to recover it. 
00:29:29.041 [2024-11-06 11:11:20.343596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.041 [2024-11-06 11:11:20.343656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.041 [2024-11-06 11:11:20.343669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.041 [2024-11-06 11:11:20.343677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.041 [2024-11-06 11:11:20.343684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.041 [2024-11-06 11:11:20.343698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.041 qpair failed and we were unable to recover it. 
00:29:29.041 [2024-11-06 11:11:20.353629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.041 [2024-11-06 11:11:20.353678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.041 [2024-11-06 11:11:20.353692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.041 [2024-11-06 11:11:20.353699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.041 [2024-11-06 11:11:20.353706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.041 [2024-11-06 11:11:20.353720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.041 qpair failed and we were unable to recover it. 
00:29:29.041 [2024-11-06 11:11:20.363651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.041 [2024-11-06 11:11:20.363704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.041 [2024-11-06 11:11:20.363721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.041 [2024-11-06 11:11:20.363729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.041 [2024-11-06 11:11:20.363735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.041 [2024-11-06 11:11:20.363754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.041 qpair failed and we were unable to recover it. 
00:29:29.041 [2024-11-06 11:11:20.373678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.041 [2024-11-06 11:11:20.373740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.041 [2024-11-06 11:11:20.373757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.041 [2024-11-06 11:11:20.373765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.041 [2024-11-06 11:11:20.373771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.041 [2024-11-06 11:11:20.373785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.041 qpair failed and we were unable to recover it. 
00:29:29.041 [2024-11-06 11:11:20.383700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.041 [2024-11-06 11:11:20.383756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.041 [2024-11-06 11:11:20.383769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.041 [2024-11-06 11:11:20.383777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.041 [2024-11-06 11:11:20.383783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.041 [2024-11-06 11:11:20.383797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.041 qpair failed and we were unable to recover it. 
00:29:29.041 [2024-11-06 11:11:20.393737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.041 [2024-11-06 11:11:20.393788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.041 [2024-11-06 11:11:20.393802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.041 [2024-11-06 11:11:20.393809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.041 [2024-11-06 11:11:20.393816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.042 [2024-11-06 11:11:20.393831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.042 qpair failed and we were unable to recover it. 
00:29:29.042 [2024-11-06 11:11:20.403857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.042 [2024-11-06 11:11:20.403937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.042 [2024-11-06 11:11:20.403951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.042 [2024-11-06 11:11:20.403958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.042 [2024-11-06 11:11:20.403969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.042 [2024-11-06 11:11:20.403983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.042 qpair failed and we were unable to recover it. 
00:29:29.042 [2024-11-06 11:11:20.413797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.042 [2024-11-06 11:11:20.413848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.042 [2024-11-06 11:11:20.413861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.042 [2024-11-06 11:11:20.413868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.042 [2024-11-06 11:11:20.413875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.042 [2024-11-06 11:11:20.413889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.042 qpair failed and we were unable to recover it. 
00:29:29.042 [2024-11-06 11:11:20.423858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.042 [2024-11-06 11:11:20.423908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.042 [2024-11-06 11:11:20.423921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.042 [2024-11-06 11:11:20.423929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.042 [2024-11-06 11:11:20.423935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.042 [2024-11-06 11:11:20.423950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.042 qpair failed and we were unable to recover it. 
00:29:29.042 [2024-11-06 11:11:20.433853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.042 [2024-11-06 11:11:20.433903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.042 [2024-11-06 11:11:20.433916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.042 [2024-11-06 11:11:20.433923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.042 [2024-11-06 11:11:20.433930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.042 [2024-11-06 11:11:20.433943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.042 qpair failed and we were unable to recover it. 
00:29:29.042 [2024-11-06 11:11:20.443803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.042 [2024-11-06 11:11:20.443865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.042 [2024-11-06 11:11:20.443879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.042 [2024-11-06 11:11:20.443887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.042 [2024-11-06 11:11:20.443894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.042 [2024-11-06 11:11:20.443907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.042 qpair failed and we were unable to recover it. 
00:29:29.042 [2024-11-06 11:11:20.453915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.042 [2024-11-06 11:11:20.453984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.042 [2024-11-06 11:11:20.453997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.042 [2024-11-06 11:11:20.454005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.042 [2024-11-06 11:11:20.454012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.042 [2024-11-06 11:11:20.454026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.042 qpair failed and we were unable to recover it. 
00:29:29.303 [2024-11-06 11:11:20.463814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.304 [2024-11-06 11:11:20.463864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.304 [2024-11-06 11:11:20.463877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.304 [2024-11-06 11:11:20.463885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.304 [2024-11-06 11:11:20.463892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.304 [2024-11-06 11:11:20.463906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.304 qpair failed and we were unable to recover it. 
00:29:29.304 [2024-11-06 11:11:20.473951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.304 [2024-11-06 11:11:20.474014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.304 [2024-11-06 11:11:20.474027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.304 [2024-11-06 11:11:20.474035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.304 [2024-11-06 11:11:20.474042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.304 [2024-11-06 11:11:20.474056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.304 qpair failed and we were unable to recover it. 
00:29:29.304 [2024-11-06 11:11:20.484045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.304 [2024-11-06 11:11:20.484103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.304 [2024-11-06 11:11:20.484116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.304 [2024-11-06 11:11:20.484123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.304 [2024-11-06 11:11:20.484130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.304 [2024-11-06 11:11:20.484144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.304 qpair failed and we were unable to recover it. 
00:29:29.304 [2024-11-06 11:11:20.494031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.304 [2024-11-06 11:11:20.494082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.304 [2024-11-06 11:11:20.494100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.304 [2024-11-06 11:11:20.494108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.304 [2024-11-06 11:11:20.494114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.304 [2024-11-06 11:11:20.494128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.304 qpair failed and we were unable to recover it. 
00:29:29.304 [2024-11-06 11:11:20.504020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.304 [2024-11-06 11:11:20.504101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.304 [2024-11-06 11:11:20.504115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.304 [2024-11-06 11:11:20.504122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.304 [2024-11-06 11:11:20.504129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.304 [2024-11-06 11:11:20.504143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.304 qpair failed and we were unable to recover it. 
00:29:29.304 [2024-11-06 11:11:20.514060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.304 [2024-11-06 11:11:20.514157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.304 [2024-11-06 11:11:20.514171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.304 [2024-11-06 11:11:20.514179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.304 [2024-11-06 11:11:20.514185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.304 [2024-11-06 11:11:20.514199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.304 qpair failed and we were unable to recover it. 
00:29:29.304 [2024-11-06 11:11:20.524119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.304 [2024-11-06 11:11:20.524178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.304 [2024-11-06 11:11:20.524191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.304 [2024-11-06 11:11:20.524199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.304 [2024-11-06 11:11:20.524205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.304 [2024-11-06 11:11:20.524219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.304 qpair failed and we were unable to recover it. 
00:29:29.304 [2024-11-06 11:11:20.534115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.304 [2024-11-06 11:11:20.534167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.304 [2024-11-06 11:11:20.534180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.304 [2024-11-06 11:11:20.534188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.304 [2024-11-06 11:11:20.534198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.304 [2024-11-06 11:11:20.534212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.304 qpair failed and we were unable to recover it. 
00:29:29.304 [2024-11-06 11:11:20.544038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.304 [2024-11-06 11:11:20.544087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.304 [2024-11-06 11:11:20.544100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.304 [2024-11-06 11:11:20.544108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.304 [2024-11-06 11:11:20.544114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.304 [2024-11-06 11:11:20.544128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.304 qpair failed and we were unable to recover it. 
00:29:29.304 [2024-11-06 11:11:20.554187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.304 [2024-11-06 11:11:20.554234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.304 [2024-11-06 11:11:20.554248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.304 [2024-11-06 11:11:20.554255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.304 [2024-11-06 11:11:20.554261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.304 [2024-11-06 11:11:20.554275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.304 qpair failed and we were unable to recover it. 
00:29:29.304 [2024-11-06 11:11:20.564226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.304 [2024-11-06 11:11:20.564303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.304 [2024-11-06 11:11:20.564316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.304 [2024-11-06 11:11:20.564324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.304 [2024-11-06 11:11:20.564330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.304 [2024-11-06 11:11:20.564344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.304 qpair failed and we were unable to recover it. 
00:29:29.304 [2024-11-06 11:11:20.574256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.304 [2024-11-06 11:11:20.574310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.304 [2024-11-06 11:11:20.574323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.304 [2024-11-06 11:11:20.574331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.304 [2024-11-06 11:11:20.574338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.304 [2024-11-06 11:11:20.574352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.304 qpair failed and we were unable to recover it. 
00:29:29.304 [2024-11-06 11:11:20.584265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.304 [2024-11-06 11:11:20.584313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.304 [2024-11-06 11:11:20.584326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.304 [2024-11-06 11:11:20.584334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.304 [2024-11-06 11:11:20.584341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.304 [2024-11-06 11:11:20.584355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.304 qpair failed and we were unable to recover it. 
00:29:29.305 [2024-11-06 11:11:20.594259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.305 [2024-11-06 11:11:20.594312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.305 [2024-11-06 11:11:20.594328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.305 [2024-11-06 11:11:20.594335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.305 [2024-11-06 11:11:20.594342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.305 [2024-11-06 11:11:20.594357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.305 qpair failed and we were unable to recover it. 
00:29:29.305 [2024-11-06 11:11:20.604413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.305 [2024-11-06 11:11:20.604470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.305 [2024-11-06 11:11:20.604484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.305 [2024-11-06 11:11:20.604492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.305 [2024-11-06 11:11:20.604499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.305 [2024-11-06 11:11:20.604513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.305 qpair failed and we were unable to recover it. 
00:29:29.305 [2024-11-06 11:11:20.614378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.305 [2024-11-06 11:11:20.614426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.305 [2024-11-06 11:11:20.614440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.305 [2024-11-06 11:11:20.614447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.305 [2024-11-06 11:11:20.614454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.305 [2024-11-06 11:11:20.614468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.305 qpair failed and we were unable to recover it. 
00:29:29.305 [2024-11-06 11:11:20.624402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.305 [2024-11-06 11:11:20.624450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.305 [2024-11-06 11:11:20.624467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.305 [2024-11-06 11:11:20.624474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.305 [2024-11-06 11:11:20.624480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.305 [2024-11-06 11:11:20.624494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.305 qpair failed and we were unable to recover it. 
00:29:29.305 [2024-11-06 11:11:20.634276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.305 [2024-11-06 11:11:20.634320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.305 [2024-11-06 11:11:20.634333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.305 [2024-11-06 11:11:20.634340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.305 [2024-11-06 11:11:20.634346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.305 [2024-11-06 11:11:20.634361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.305 qpair failed and we were unable to recover it. 
00:29:29.305 [2024-11-06 11:11:20.644482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.305 [2024-11-06 11:11:20.644538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.305 [2024-11-06 11:11:20.644551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.305 [2024-11-06 11:11:20.644559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.305 [2024-11-06 11:11:20.644565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.305 [2024-11-06 11:11:20.644579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.305 qpair failed and we were unable to recover it. 
00:29:29.305 [2024-11-06 11:11:20.654515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.305 [2024-11-06 11:11:20.654609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.305 [2024-11-06 11:11:20.654635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.305 [2024-11-06 11:11:20.654644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.305 [2024-11-06 11:11:20.654651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.305 [2024-11-06 11:11:20.654670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.305 qpair failed and we were unable to recover it. 
00:29:29.305 [2024-11-06 11:11:20.664453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.305 [2024-11-06 11:11:20.664504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.305 [2024-11-06 11:11:20.664520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.305 [2024-11-06 11:11:20.664528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.305 [2024-11-06 11:11:20.664539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.305 [2024-11-06 11:11:20.664554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.305 qpair failed and we were unable to recover it. 
00:29:29.305 [2024-11-06 11:11:20.674474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.305 [2024-11-06 11:11:20.674523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.305 [2024-11-06 11:11:20.674536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.305 [2024-11-06 11:11:20.674544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.305 [2024-11-06 11:11:20.674551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.305 [2024-11-06 11:11:20.674565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.305 qpair failed and we were unable to recover it. 
00:29:29.305 [2024-11-06 11:11:20.684551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.305 [2024-11-06 11:11:20.684615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.305 [2024-11-06 11:11:20.684628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.305 [2024-11-06 11:11:20.684636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.305 [2024-11-06 11:11:20.684643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.305 [2024-11-06 11:11:20.684657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.305 qpair failed and we were unable to recover it. 
00:29:29.305 [2024-11-06 11:11:20.694568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.305 [2024-11-06 11:11:20.694621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.305 [2024-11-06 11:11:20.694635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.305 [2024-11-06 11:11:20.694642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.305 [2024-11-06 11:11:20.694649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.305 [2024-11-06 11:11:20.694664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.305 qpair failed and we were unable to recover it. 
00:29:29.305 [2024-11-06 11:11:20.704565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.305 [2024-11-06 11:11:20.704613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.305 [2024-11-06 11:11:20.704628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.305 [2024-11-06 11:11:20.704636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.305 [2024-11-06 11:11:20.704643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.306 [2024-11-06 11:11:20.704658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.306 qpair failed and we were unable to recover it. 
00:29:29.306 [2024-11-06 11:11:20.714610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.306 [2024-11-06 11:11:20.714660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.306 [2024-11-06 11:11:20.714674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.306 [2024-11-06 11:11:20.714682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.306 [2024-11-06 11:11:20.714688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.306 [2024-11-06 11:11:20.714703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.306 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-11-06 11:11:20.724688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-11-06 11:11:20.724749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-11-06 11:11:20.724763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-11-06 11:11:20.724770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-11-06 11:11:20.724777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.567 [2024-11-06 11:11:20.724791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-11-06 11:11:20.734658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-11-06 11:11:20.734710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-11-06 11:11:20.734723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-11-06 11:11:20.734731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-11-06 11:11:20.734738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.567 [2024-11-06 11:11:20.734755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-11-06 11:11:20.744717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-11-06 11:11:20.744801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-11-06 11:11:20.744814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-11-06 11:11:20.744822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-11-06 11:11:20.744829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.567 [2024-11-06 11:11:20.744843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-11-06 11:11:20.754591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-11-06 11:11:20.754638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-11-06 11:11:20.754656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-11-06 11:11:20.754664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-11-06 11:11:20.754670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.567 [2024-11-06 11:11:20.754685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-11-06 11:11:20.764826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-11-06 11:11:20.764906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-11-06 11:11:20.764920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-11-06 11:11:20.764927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-11-06 11:11:20.764935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.567 [2024-11-06 11:11:20.764949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-11-06 11:11:20.774781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-11-06 11:11:20.774834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-11-06 11:11:20.774847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-11-06 11:11:20.774855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-11-06 11:11:20.774861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.567 [2024-11-06 11:11:20.774876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-11-06 11:11:20.784800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-11-06 11:11:20.784850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-11-06 11:11:20.784863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-11-06 11:11:20.784871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-11-06 11:11:20.784878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.568 [2024-11-06 11:11:20.784892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-11-06 11:11:20.794805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-11-06 11:11:20.794852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-11-06 11:11:20.794866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-11-06 11:11:20.794874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-11-06 11:11:20.794887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.568 [2024-11-06 11:11:20.794902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-11-06 11:11:20.804866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-11-06 11:11:20.804930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-11-06 11:11:20.804944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-11-06 11:11:20.804952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-11-06 11:11:20.804959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.568 [2024-11-06 11:11:20.804973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-11-06 11:11:20.814866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-11-06 11:11:20.814919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-11-06 11:11:20.814932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-11-06 11:11:20.814940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-11-06 11:11:20.814947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.568 [2024-11-06 11:11:20.814961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-11-06 11:11:20.824880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-11-06 11:11:20.824930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-11-06 11:11:20.824944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-11-06 11:11:20.824951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-11-06 11:11:20.824958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.568 [2024-11-06 11:11:20.824973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-11-06 11:11:20.834930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-11-06 11:11:20.834978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-11-06 11:11:20.834991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-11-06 11:11:20.834999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-11-06 11:11:20.835005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.568 [2024-11-06 11:11:20.835020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-11-06 11:11:20.845003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-11-06 11:11:20.845058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-11-06 11:11:20.845072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-11-06 11:11:20.845079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-11-06 11:11:20.845086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.568 [2024-11-06 11:11:20.845099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-11-06 11:11:20.854977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-11-06 11:11:20.855030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-11-06 11:11:20.855043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-11-06 11:11:20.855051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-11-06 11:11:20.855057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.568 [2024-11-06 11:11:20.855071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-11-06 11:11:20.864971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-11-06 11:11:20.865023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-11-06 11:11:20.865036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-11-06 11:11:20.865044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-11-06 11:11:20.865050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.568 [2024-11-06 11:11:20.865064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-11-06 11:11:20.875017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-11-06 11:11:20.875064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-11-06 11:11:20.875077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-11-06 11:11:20.875085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-11-06 11:11:20.875092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.568 [2024-11-06 11:11:20.875105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-11-06 11:11:20.885122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-11-06 11:11:20.885175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-11-06 11:11:20.885191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-11-06 11:11:20.885199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-11-06 11:11:20.885205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.568 [2024-11-06 11:11:20.885219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-11-06 11:11:20.895106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-11-06 11:11:20.895160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-11-06 11:11:20.895174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-11-06 11:11:20.895181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-11-06 11:11:20.895188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.568 [2024-11-06 11:11:20.895202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-11-06 11:11:20.905090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-11-06 11:11:20.905139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-11-06 11:11:20.905152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-11-06 11:11:20.905159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-11-06 11:11:20.905165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0 00:29:29.568 [2024-11-06 11:11:20.905180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-11-06 11:11:20.915149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.568 [2024-11-06 11:11:20.915196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.568 [2024-11-06 11:11:20.915209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.568 [2024-11-06 11:11:20.915217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.568 [2024-11-06 11:11:20.915223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.569 [2024-11-06 11:11:20.915237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.569 qpair failed and we were unable to recover it.
00:29:29.569 [2024-11-06 11:11:20.925090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.569 [2024-11-06 11:11:20.925143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.569 [2024-11-06 11:11:20.925158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.569 [2024-11-06 11:11:20.925165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.569 [2024-11-06 11:11:20.925175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.569 [2024-11-06 11:11:20.925190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.569 qpair failed and we were unable to recover it.
00:29:29.569 [2024-11-06 11:11:20.935208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.569 [2024-11-06 11:11:20.935261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.569 [2024-11-06 11:11:20.935275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.569 [2024-11-06 11:11:20.935282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.569 [2024-11-06 11:11:20.935289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.569 [2024-11-06 11:11:20.935303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.569 qpair failed and we were unable to recover it.
00:29:29.569 [2024-11-06 11:11:20.945220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.569 [2024-11-06 11:11:20.945271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.569 [2024-11-06 11:11:20.945284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.569 [2024-11-06 11:11:20.945292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.569 [2024-11-06 11:11:20.945298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.569 [2024-11-06 11:11:20.945313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.569 qpair failed and we were unable to recover it.
00:29:29.569 [2024-11-06 11:11:20.955251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.569 [2024-11-06 11:11:20.955303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.569 [2024-11-06 11:11:20.955317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.569 [2024-11-06 11:11:20.955324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.569 [2024-11-06 11:11:20.955331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.569 [2024-11-06 11:11:20.955344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.569 qpair failed and we were unable to recover it.
00:29:29.569 [2024-11-06 11:11:20.965306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.569 [2024-11-06 11:11:20.965377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.569 [2024-11-06 11:11:20.965391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.569 [2024-11-06 11:11:20.965398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.569 [2024-11-06 11:11:20.965405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.569 [2024-11-06 11:11:20.965419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.569 qpair failed and we were unable to recover it.
00:29:29.569 [2024-11-06 11:11:20.975224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.569 [2024-11-06 11:11:20.975278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.569 [2024-11-06 11:11:20.975293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.569 [2024-11-06 11:11:20.975301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.569 [2024-11-06 11:11:20.975308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.569 [2024-11-06 11:11:20.975324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.569 qpair failed and we were unable to recover it.
00:29:29.569 [2024-11-06 11:11:20.985308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.569 [2024-11-06 11:11:20.985355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.569 [2024-11-06 11:11:20.985370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.569 [2024-11-06 11:11:20.985377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.569 [2024-11-06 11:11:20.985384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.569 [2024-11-06 11:11:20.985397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.569 qpair failed and we were unable to recover it.
00:29:29.830 [2024-11-06 11:11:20.995372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.830 [2024-11-06 11:11:20.995421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.830 [2024-11-06 11:11:20.995436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.830 [2024-11-06 11:11:20.995443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.830 [2024-11-06 11:11:20.995450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.830 [2024-11-06 11:11:20.995465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.830 qpair failed and we were unable to recover it.
00:29:29.830 [2024-11-06 11:11:21.005402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.830 [2024-11-06 11:11:21.005461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.830 [2024-11-06 11:11:21.005486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.830 [2024-11-06 11:11:21.005495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.830 [2024-11-06 11:11:21.005502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.830 [2024-11-06 11:11:21.005522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.830 qpair failed and we were unable to recover it.
00:29:29.830 [2024-11-06 11:11:21.015457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.830 [2024-11-06 11:11:21.015551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.830 [2024-11-06 11:11:21.015581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.830 [2024-11-06 11:11:21.015591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.830 [2024-11-06 11:11:21.015599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.830 [2024-11-06 11:11:21.015618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.830 qpair failed and we were unable to recover it.
00:29:29.830 [2024-11-06 11:11:21.025314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.830 [2024-11-06 11:11:21.025367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.830 [2024-11-06 11:11:21.025382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.830 [2024-11-06 11:11:21.025390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.830 [2024-11-06 11:11:21.025397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.830 [2024-11-06 11:11:21.025412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.830 qpair failed and we were unable to recover it.
00:29:29.830 [2024-11-06 11:11:21.035483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.830 [2024-11-06 11:11:21.035534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.830 [2024-11-06 11:11:21.035547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.830 [2024-11-06 11:11:21.035555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.830 [2024-11-06 11:11:21.035562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.830 [2024-11-06 11:11:21.035577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.830 qpair failed and we were unable to recover it.
00:29:29.830 [2024-11-06 11:11:21.045567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.830 [2024-11-06 11:11:21.045658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.830 [2024-11-06 11:11:21.045683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.830 [2024-11-06 11:11:21.045692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.830 [2024-11-06 11:11:21.045699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.830 [2024-11-06 11:11:21.045719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.830 qpair failed and we were unable to recover it.
00:29:29.830 [2024-11-06 11:11:21.055557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.830 [2024-11-06 11:11:21.055612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.830 [2024-11-06 11:11:21.055627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.830 [2024-11-06 11:11:21.055635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.830 [2024-11-06 11:11:21.055647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.830 [2024-11-06 11:11:21.055662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.830 qpair failed and we were unable to recover it.
00:29:29.830 [2024-11-06 11:11:21.065517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.830 [2024-11-06 11:11:21.065568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.830 [2024-11-06 11:11:21.065582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.830 [2024-11-06 11:11:21.065589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.830 [2024-11-06 11:11:21.065596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x20af0c0
00:29:29.830 [2024-11-06 11:11:21.065610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.830 qpair failed and we were unable to recover it.
00:29:29.830 [2024-11-06 11:11:21.075582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.830 [2024-11-06 11:11:21.075694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.831 [2024-11-06 11:11:21.075769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.831 [2024-11-06 11:11:21.075798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.831 [2024-11-06 11:11:21.075819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbdb4000b90
00:29:29.831 [2024-11-06 11:11:21.075875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:29.831 qpair failed and we were unable to recover it.
00:29:29.831 [2024-11-06 11:11:21.085661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.831 [2024-11-06 11:11:21.085765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.831 [2024-11-06 11:11:21.085794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.831 [2024-11-06 11:11:21.085809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.831 [2024-11-06 11:11:21.085822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbdb4000b90
00:29:29.831 [2024-11-06 11:11:21.085851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:29.831 qpair failed and we were unable to recover it.
00:29:29.831 [2024-11-06 11:11:21.095660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.831 [2024-11-06 11:11:21.095773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.831 [2024-11-06 11:11:21.095839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.831 [2024-11-06 11:11:21.095864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.831 [2024-11-06 11:11:21.095886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbda8000b90
00:29:29.831 [2024-11-06 11:11:21.095942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:29.831 qpair failed and we were unable to recover it.
00:29:29.831 [2024-11-06 11:11:21.105573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.831 [2024-11-06 11:11:21.105652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.831 [2024-11-06 11:11:21.105686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.831 [2024-11-06 11:11:21.105704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.831 [2024-11-06 11:11:21.105720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbda8000b90
00:29:29.831 [2024-11-06 11:11:21.105763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:29.831 qpair failed and we were unable to recover it.
00:29:29.831 [2024-11-06 11:11:21.115586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.831 [2024-11-06 11:11:21.115638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.831 [2024-11-06 11:11:21.115657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.831 [2024-11-06 11:11:21.115664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.831 [2024-11-06 11:11:21.115669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbdac000b90
00:29:29.831 [2024-11-06 11:11:21.115683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.831 qpair failed and we were unable to recover it.
00:29:29.831 [2024-11-06 11:11:21.125777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.831 [2024-11-06 11:11:21.125836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.831 [2024-11-06 11:11:21.125855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.831 [2024-11-06 11:11:21.125861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.831 [2024-11-06 11:11:21.125866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbdac000b90
00:29:29.831 [2024-11-06 11:11:21.125881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.831 qpair failed and we were unable to recover it.
00:29:29.831 [2024-11-06 11:11:21.126012] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:29:29.831 A controller has encountered a failure and is being reset.
00:29:29.831 [2024-11-06 11:11:21.126138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a4e00 (9): Bad file descriptor
00:29:30.092 Controller properly reset.
00:29:30.092 Initializing NVMe Controllers
00:29:30.092 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:30.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:30.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:29:30.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:29:30.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:29:30.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:29:30.092 Initialization complete. Launching workers.
00:29:30.092 Starting thread on core 1
00:29:30.092 Starting thread on core 2
00:29:30.092 Starting thread on core 3
00:29:30.092 Starting thread on core 0
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:29:30.092
00:29:30.092 real 0m11.576s
00:29:30.092 user 0m21.976s
00:29:30.092 sys 0m3.565s
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:30.092 ************************************
00:29:30.092 END TEST nvmf_target_disconnect_tc2
00:29:30.092 ************************************
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3445290 ']'
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3445290
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3445290 ']'
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 3445290
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3445290
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']'
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3445290'
killing process with pid 3445290
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 3445290
00:29:30.092 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 3445290
00:29:30.353 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:30.353 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:30.353 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:30.353 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:29:30.353 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:30.353 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:29:30.353 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:29:30.353 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:30.353 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:30.353 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:30.353 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:30.353 11:11:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:32.264 11:11:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:32.264
00:29:32.264 real 0m21.578s
00:29:32.264 user 0m50.370s
00:29:32.264 sys 0m9.381s
00:29:32.264 11:11:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:32.264 11:11:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:29:32.264 ************************************
00:29:32.264 END TEST nvmf_target_disconnect
00:29:32.264 ************************************
00:29:32.525 11:11:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:29:32.525
00:29:32.525 real 6m25.584s
00:29:32.525 user 11m23.820s
00:29:32.525 sys 2m9.384s
00:29:32.525 11:11:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:32.525 11:11:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:32.525 ************************************
00:29:32.525 END TEST nvmf_host
00:29:32.525 ************************************
00:29:32.525 11:11:23 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:29:32.525 11:11:23 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:29:32.525 11:11:23 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:29:32.525 11:11:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:29:32.525 11:11:23 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:32.525 11:11:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:32.525 ************************************
00:29:32.525 START TEST nvmf_target_core_interrupt_mode
00:29:32.525 ************************************
00:29:32.525 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:29:32.525 * Looking for test storage...
00:29:32.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:29:32.525 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:29:32.525 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version
00:29:32.525 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:32.787 11:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:29:32.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:32.787 --rc genhtml_branch_coverage=1
00:29:32.787 --rc genhtml_function_coverage=1
00:29:32.787 --rc genhtml_legend=1
00:29:32.787 --rc geninfo_all_blocks=1
00:29:32.787 --rc geninfo_unexecuted_blocks=1
00:29:32.787
00:29:32.787 '
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:29:32.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:32.787 --rc genhtml_branch_coverage=1
00:29:32.787 --rc genhtml_function_coverage=1
00:29:32.787 --rc genhtml_legend=1
00:29:32.787 --rc geninfo_all_blocks=1
00:29:32.787 --rc geninfo_unexecuted_blocks=1
00:29:32.787
00:29:32.787 '
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:29:32.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:32.787 --rc genhtml_branch_coverage=1
00:29:32.787 --rc genhtml_function_coverage=1
00:29:32.787 --rc genhtml_legend=1
00:29:32.787 --rc geninfo_all_blocks=1
00:29:32.787 --rc geninfo_unexecuted_blocks=1
00:29:32.787
00:29:32.787 '
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:29:32.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:32.787 --rc genhtml_branch_coverage=1
00:29:32.787 --rc genhtml_function_coverage=1
00:29:32.787 --rc genhtml_legend=1
00:29:32.787 --rc geninfo_all_blocks=1
00:29:32.787 --rc geninfo_unexecuted_blocks=1
00:29:32.787
00:29:32.787 '
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.787 11:11:24 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:32.787 
11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:32.787 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:32.788 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:32.788 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:32.788 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:32.788 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:32.788 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:32.788 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:32.788 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:32.788 ************************************ 00:29:32.788 START TEST nvmf_abort 00:29:32.788 ************************************ 00:29:32.788 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:32.788 * Looking for test storage... 
00:29:32.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:32.788 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:32.788 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:29:32.788 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:33.049 11:11:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:33.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.049 --rc genhtml_branch_coverage=1 00:29:33.049 --rc genhtml_function_coverage=1 00:29:33.049 --rc genhtml_legend=1 00:29:33.049 --rc geninfo_all_blocks=1 00:29:33.049 --rc geninfo_unexecuted_blocks=1 00:29:33.049 00:29:33.049 ' 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:33.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.049 --rc genhtml_branch_coverage=1 00:29:33.049 --rc genhtml_function_coverage=1 00:29:33.049 --rc genhtml_legend=1 00:29:33.049 --rc geninfo_all_blocks=1 00:29:33.049 --rc geninfo_unexecuted_blocks=1 00:29:33.049 00:29:33.049 ' 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:33.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.049 --rc genhtml_branch_coverage=1 00:29:33.049 --rc genhtml_function_coverage=1 00:29:33.049 --rc genhtml_legend=1 00:29:33.049 --rc geninfo_all_blocks=1 00:29:33.049 --rc geninfo_unexecuted_blocks=1 00:29:33.049 00:29:33.049 ' 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:33.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.049 --rc genhtml_branch_coverage=1 00:29:33.049 --rc genhtml_function_coverage=1 00:29:33.049 --rc genhtml_legend=1 00:29:33.049 --rc geninfo_all_blocks=1 00:29:33.049 --rc geninfo_unexecuted_blocks=1 00:29:33.049 00:29:33.049 ' 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:33.049 11:11:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.049 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:33.050 11:11:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:33.050 11:11:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:41.189 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:29:41.189 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:41.189 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:41.189 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:41.189 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:41.189 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:41.190 11:11:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:41.190 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:41.190 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.190 
11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:41.190 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:41.190 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.190 11:11:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:41.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:41.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:29:41.190 00:29:41.190 --- 10.0.0.2 ping statistics --- 00:29:41.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.190 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:41.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:29:41.190 00:29:41.190 --- 10.0.0.1 ping statistics --- 00:29:41.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.190 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.190 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3450854 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3450854 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3450854 ']' 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:41.191 [2024-11-06 11:11:31.626357] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:41.191 [2024-11-06 11:11:31.627310] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:29:41.191 [2024-11-06 11:11:31.627348] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.191 [2024-11-06 11:11:31.723245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:41.191 [2024-11-06 11:11:31.758023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.191 [2024-11-06 11:11:31.758060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.191 [2024-11-06 11:11:31.758068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.191 [2024-11-06 11:11:31.758074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.191 [2024-11-06 11:11:31.758080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:41.191 [2024-11-06 11:11:31.759354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:41.191 [2024-11-06 11:11:31.759513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.191 [2024-11-06 11:11:31.759514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:41.191 [2024-11-06 11:11:31.814551] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:41.191 [2024-11-06 11:11:31.814590] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:41.191 [2024-11-06 11:11:31.815190] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:41.191 [2024-11-06 11:11:31.815513] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:41.191 [2024-11-06 11:11:31.884276] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:29:41.191 Malloc0 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:41.191 Delay0 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:41.191 [2024-11-06 11:11:31.980198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.191 11:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:41.191 [2024-11-06 11:11:32.149811] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:43.104 Initializing NVMe Controllers 00:29:43.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:43.104 controller IO queue size 128 less than required 00:29:43.104 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:43.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:43.104 Initialization complete. Launching workers. 
00:29:43.104 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29049 00:29:43.104 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29106, failed to submit 66 00:29:43.104 success 29049, unsuccessful 57, failed 0 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:43.104 rmmod nvme_tcp 00:29:43.104 rmmod nvme_fabrics 00:29:43.104 rmmod nvme_keyring 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:43.104 11:11:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3450854 ']' 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3450854 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3450854 ']' 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3450854 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3450854 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3450854' 00:29:43.104 killing process with pid 3450854 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3450854 00:29:43.104 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3450854 00:29:43.365 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:43.365 11:11:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:43.365 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:43.365 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:43.365 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:29:43.365 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:43.365 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:29:43.365 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:43.365 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:43.365 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.365 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.365 11:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:45.912 00:29:45.912 real 0m12.622s 00:29:45.912 user 0m11.131s 00:29:45.912 sys 0m6.730s 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:45.912 ************************************ 00:29:45.912 END TEST nvmf_abort 00:29:45.912 ************************************ 00:29:45.912 11:11:36 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:45.912 ************************************ 00:29:45.912 START TEST nvmf_ns_hotplug_stress 00:29:45.912 ************************************ 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:45.912 * Looking for test storage... 
00:29:45.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:45.912 11:11:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:45.912 11:11:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:45.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.912 --rc genhtml_branch_coverage=1 00:29:45.912 --rc genhtml_function_coverage=1 00:29:45.912 --rc genhtml_legend=1 00:29:45.912 --rc geninfo_all_blocks=1 00:29:45.912 --rc geninfo_unexecuted_blocks=1 00:29:45.912 00:29:45.912 ' 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:45.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.912 --rc genhtml_branch_coverage=1 00:29:45.912 --rc genhtml_function_coverage=1 00:29:45.912 --rc genhtml_legend=1 00:29:45.912 --rc geninfo_all_blocks=1 00:29:45.912 --rc geninfo_unexecuted_blocks=1 00:29:45.912 00:29:45.912 ' 00:29:45.912 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:45.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.912 --rc genhtml_branch_coverage=1 00:29:45.912 --rc genhtml_function_coverage=1 00:29:45.912 --rc genhtml_legend=1 00:29:45.912 --rc geninfo_all_blocks=1 00:29:45.912 --rc geninfo_unexecuted_blocks=1 00:29:45.912 00:29:45.912 ' 00:29:45.912 11:11:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:45.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.912 --rc genhtml_branch_coverage=1 00:29:45.912 --rc genhtml_function_coverage=1 00:29:45.913 --rc genhtml_legend=1 00:29:45.913 --rc geninfo_all_blocks=1 00:29:45.913 --rc geninfo_unexecuted_blocks=1 00:29:45.913 00:29:45.913 ' 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.913 11:11:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.913 
11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:45.913 11:11:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:45.913 11:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:52.498 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:52.499 
11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.499 11:11:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:52.499 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.499 11:11:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:52.499 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.499 
11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:52.499 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:52.499 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:52.499 
11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.499 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.500 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:52.500 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:52.500 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.500 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.500 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:52.500 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:52.500 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.500 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.500 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:52.500 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.500 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:52.500 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.760 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.760 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.760 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:52.760 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:52.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:29:52.760 00:29:52.760 --- 10.0.0.2 ping statistics --- 00:29:52.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.760 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:29:52.760 11:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:52.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:29:52.760 00:29:52.760 --- 10.0.0.1 ping statistics --- 00:29:52.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.760 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:29:52.760 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.760 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:29:52.760 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:52.760 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.760 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:52.760 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:52.760 11:11:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.760 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:52.760 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:52.761 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:52.761 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:52.761 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:52.761 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:52.761 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3455535 00:29:52.761 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3455535 00:29:52.761 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:52.761 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 3455535 ']' 00:29:52.761 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.761 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:52.761 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.761 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:52.761 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:52.761 [2024-11-06 11:11:44.107944] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:52.761 [2024-11-06 11:11:44.109303] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:29:52.761 [2024-11-06 11:11:44.109355] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.022 [2024-11-06 11:11:44.210467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:53.022 [2024-11-06 11:11:44.261486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.022 [2024-11-06 11:11:44.261537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.022 [2024-11-06 11:11:44.261546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.022 [2024-11-06 11:11:44.261553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.022 [2024-11-06 11:11:44.261560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:53.022 [2024-11-06 11:11:44.263339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:53.022 [2024-11-06 11:11:44.263506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.022 [2024-11-06 11:11:44.263508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:53.022 [2024-11-06 11:11:44.339493] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:53.022 [2024-11-06 11:11:44.339552] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:53.022 [2024-11-06 11:11:44.340188] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:53.022 [2024-11-06 11:11:44.340464] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:53.593 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:53.593 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:29:53.593 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:53.593 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:53.593 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:53.593 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.593 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:29:53.593 11:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:53.853 [2024-11-06 11:11:45.116405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.853 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:54.113 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:54.113 [2024-11-06 11:11:45.484794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.113 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:54.374 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:54.635 Malloc0 00:29:54.635 11:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:54.895 Delay0 00:29:54.895 11:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.895 11:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:55.156 NULL1 00:29:55.156 11:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:29:55.417 11:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3455929 00:29:55.417 11:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:55.417 11:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:29:55.417 11:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.376 Read completed with error (sct=0, sc=11) 00:29:56.376 11:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:56.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:56.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:29:56.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:56.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:56.730 11:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:56.730 11:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:56.990 true 00:29:56.990 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:29:56.990 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.933 11:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.933 11:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:57.933 11:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:57.933 true 00:29:58.194 11:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:29:58.194 11:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.194 11:11:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.453 11:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:58.453 11:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:58.713 true 00:29:58.713 11:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:29:58.713 11:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:59.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.657 11:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.918 11:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:59.918 11:11:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:00.179 true 00:30:00.179 11:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:00.179 11:11:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.122 11:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:01.122 11:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:01.122 11:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:01.383 true 00:30:01.383 11:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:01.384 11:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.384 11:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:01.644 11:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 
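The sequence repeating in the log above is the hot-plug loop from target/ns_hotplug_stress.sh (markers @44 through @50): check that the perf workload is still alive, hot-remove and re-add the namespace under I/O, then grow the null bdev by one block. A minimal, self-contained sketch of that loop, where `rpc` is a stub standing in for scripts/rpc.py and `$$` stands in for the spdk_nvme_perf PID (both substitutions are assumptions for illustration, not the test's actual code):

```shell
#!/usr/bin/env bash
# Sketch of the ns_hotplug_stress loop seen in the log above.
# "rpc" is a stub for spdk/scripts/rpc.py (assumption for illustration).
rpc() { echo "rpc $*"; }

PERF_PID=$$          # in the real test this is the spdk_nvme_perf process (e.g. 3455929)
null_size=1000       # initial size passed to bdev_null_create

for i in 1 2 3; do
    # @44: stop looping as soon as the perf workload exits
    kill -0 "$PERF_PID" || break
    # @45/@46: hot-remove and re-add the namespace while I/O is in flight
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # @49/@50: grow the null bdev by one block each iteration
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"
done
echo "final null_size=$null_size"
```

The in-flight reads failing with `sct=0, sc=11` (Invalid Namespace or Format) during each remove/add window are the expected outcome the stress test is exercising.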
00:30:01.644 11:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:01.905 true 00:30:01.905 11:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:01.905 11:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.905 11:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.166 11:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:02.166 11:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:02.427 true 00:30:02.427 11:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:02.427 11:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.427 11:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.688 11:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1008 00:30:02.688 11:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:02.949 true 00:30:02.949 11:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:02.949 11:11:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:03.890 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:04.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:04.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:04.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:04.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:04.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:04.151 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:04.151 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:04.413 true 00:30:04.413 11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:04.413 
11:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.356 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.356 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:05.356 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:05.617 true 00:30:05.617 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:05.617 11:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.878 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.878 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:05.878 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:06.139 true 00:30:06.139 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
3455929 00:30:06.139 11:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.524 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.524 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:07.524 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:07.785 true 00:30:07.785 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:07.785 11:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.728 11:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.728 11:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:08.728 11:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:08.988 true 00:30:08.988 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:08.988 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.989 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.249 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:09.249 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:09.509 true 00:30:09.509 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:09.509 11:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.891 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.891 11:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.891 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:10.891 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:10.891 true 00:30:10.891 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:10.891 11:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:11.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:11.831 11:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.091 11:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:12.091 11:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:12.091 true 00:30:12.091 11:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:12.091 11:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.350 11:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.610 11:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:12.610 11:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:12.610 true 00:30:12.870 11:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:12.870 11:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.870 11:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.130 11:12:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:13.130 11:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:13.390 true 00:30:13.390 11:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:13.390 11:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.390 11:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.651 11:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:13.651 11:12:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:13.910 true 00:30:13.910 11:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:13.910 11:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.910 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:13.910 11:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.189 [2024-11-06 11:12:05.486090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.189 [2024-11-06 11:12:05.486143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.189 [2024-11-06 11:12:05.486211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.189 [2024-11-06 11:12:05.486243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.189 [2024-11-06 11:12:05.486272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.189 [2024-11-06 11:12:05.486299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.189 [2024-11-06 11:12:05.486327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.189 [2024-11-06 11:12:05.486358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.189 [2024-11-06 11:12:05.486388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.189 [2024-11-06 11:12:05.486421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:30:14.190 [2024-11-06 11:12:05.490122] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.490150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.490719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.490757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.490786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.490815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.490840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.490870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.490899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.490935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.490967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491468] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.190 [2024-11-06 11:12:05.491522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.491545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.491569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.491592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.491616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.491638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.491661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.491684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.491709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.491739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.491777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.491806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.491839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.491868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.491896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.491926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.491959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.491998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 
11:12:05.492316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.492988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 
[2024-11-06 11:12:05.493319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493763] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.493990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.494661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.191 [2024-11-06 11:12:05.495233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495269] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.495965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 
11:12:05.496183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.496981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 
[2024-11-06 11:12:05.497087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497627] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.192 [2024-11-06 11:12:05.497656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd error repeated for every read attempt between 2024-11-06 11:12:05.497694 and 11:12:05.508139]
00:30:14.193 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:30:14.195 [2024-11-06 11:12:05.508166] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.195 [2024-11-06 11:12:05.508193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.195 [2024-11-06 11:12:05.508225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.195 [2024-11-06 11:12:05.508253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.195 [2024-11-06 11:12:05.508284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.195 [2024-11-06 11:12:05.508311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.195 [2024-11-06 11:12:05.508337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.195 [2024-11-06 11:12:05.508364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.195 [2024-11-06 11:12:05.508403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.195 [2024-11-06 11:12:05.508436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.195 [2024-11-06 11:12:05.508473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.195 [2024-11-06 11:12:05.508503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.195 [2024-11-06 11:12:05.508531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.195 [2024-11-06 11:12:05.508562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.195 [2024-11-06 11:12:05.508592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.196 [2024-11-06 11:12:05.508620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.508648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.508675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.508701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.508732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.508886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.508917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.508946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.508982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509173] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.509970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 
11:12:05.510065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.510743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.511537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.511566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.511599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.511630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 
[2024-11-06 11:12:05.511658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.511685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.511713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.511745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.511779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.511808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.511838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.511874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.511904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.511959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.511989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.512020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.512050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.512082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.512112] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.512138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.512171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.512205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.512234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.512269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.512297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.512326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.512355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.512383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.512415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.512442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.196 [2024-11-06 11:12:05.512470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.512499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.512548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.197 [2024-11-06 11:12:05.512575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.512639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.512670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.512703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.512732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.512765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.512794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.512827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.512856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.512896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.512926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.512960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.512987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513046] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.513977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 
11:12:05.514095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.514985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 
[2024-11-06 11:12:05.515020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.515057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.515086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.515117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.515146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.515208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.515239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.515289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.515320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.515358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.515391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.515429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.515460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.515488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.197 [2024-11-06 11:12:05.515516] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:14.197 [2024-11-06 11:12:05.515546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:14.200 11:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:30:14.200 11:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:30:14.200 [2024-11-06 11:12:05.525575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.200 [2024-11-06 11:12:05.525599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.525624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.525649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.525679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.525709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.525741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.525775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.525804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.525867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.525899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.525929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.525958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.525989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.526022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.526055] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.526085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.526117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.526147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.200 [2024-11-06 11:12:05.526179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.526941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 
11:12:05.526973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.527974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 
[2024-11-06 11:12:05.528282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528737] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.528993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529647] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.529675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.201 [2024-11-06 11:12:05.530345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.530985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 
11:12:05.531210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.531974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 
[2024-11-06 11:12:05.532125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532649] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.532987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.533019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.533048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.533092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.202 [2024-11-06 11:12:05.533122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.202 [2024-11-06 11:12:05.533154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical error repeated several hundred times between 11:12:05.533154 and 11:12:05.543919; per-message timestamps elided]
00:30:14.204 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:30:14.205 [2024-11-06 11:12:05.543919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:30:14.205 [2024-11-06 11:12:05.543950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.205 [2024-11-06 11:12:05.543984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.205 [2024-11-06 11:12:05.544019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.205 [2024-11-06 11:12:05.544049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.205 [2024-11-06 11:12:05.544080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.205 [2024-11-06 11:12:05.544112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.205 [2024-11-06 11:12:05.544143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.205 [2024-11-06 11:12:05.544179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.205 [2024-11-06 11:12:05.544332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.205 [2024-11-06 11:12:05.544365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544534] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.544966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 
11:12:05.545449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.545984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.546014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.546053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.546084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.546112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.546151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.546179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.546227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.546258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.546287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.546320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.546350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.546390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.546424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 
[2024-11-06 11:12:05.546998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547437] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.547999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.548033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.548064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.548089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.548123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.206 [2024-11-06 11:12:05.548158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548381] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.548974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 
11:12:05.549389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.549994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 
[2024-11-06 11:12:05.550614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.550998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.551026] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.551056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.551084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.551111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.551139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.551166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.551189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.551222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.207 [2024-11-06 11:12:05.551251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.208 [2024-11-06 11:12:05.551280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.208 [2024-11-06 11:12:05.551320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.208 [2024-11-06 11:12:05.551350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.208 [2024-11-06 11:12:05.551534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.208 [2024-11-06 11:12:05.551565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.208 [2024-11-06 11:12:05.551592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.208 [2024-11-06 11:12:05.551621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:30:14.211 [2024-11-06 11:12:05.562762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.562790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.562821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.562850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.562884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.562915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.562972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563530] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.563966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 
11:12:05.564422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.564970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.565003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.565033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.565063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.565091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.565125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.565154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.565182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.565207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.565236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.565272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.211 [2024-11-06 11:12:05.565309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 
[2024-11-06 11:12:05.565341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.565371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.565402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.565433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.565581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.565613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.565644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.565683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.565716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.565744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.565781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.565810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.565843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.565873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.565903] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.565931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.565965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.565994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.566997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567153] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.567915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 
11:12:05.568272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.568988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.569021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.569050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.212 [2024-11-06 11:12:05.569085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 [2024-11-06 11:12:05.569117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 [2024-11-06 11:12:05.569157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 
[2024-11-06 11:12:05.569199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 [2024-11-06 11:12:05.569231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 [2024-11-06 11:12:05.569263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 [2024-11-06 11:12:05.569299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 [2024-11-06 11:12:05.569330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 [2024-11-06 11:12:05.569363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 [2024-11-06 11:12:05.569397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 [2024-11-06 11:12:05.569426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 [2024-11-06 11:12:05.569465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 [2024-11-06 11:12:05.569496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 [2024-11-06 11:12:05.569523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 [2024-11-06 11:12:05.569565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 [2024-11-06 11:12:05.569597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 [2024-11-06 11:12:05.569624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 [2024-11-06 11:12:05.569655] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.213 [2024-11-06 11:12:05.569685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated for timestamps 11:12:05.569717 through 11:12:05.574830 ...]
00:30:14.214 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:30:14.214 [2024-11-06 11:12:05.575478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated for timestamps 11:12:05.575512 through 11:12:05.581140 ...]
00:30:14.216 [2024-11-06 11:12:05.581170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581652] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.581988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 
11:12:05.582688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.582989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.583023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.583053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.583108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.583138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.583168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.583200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.583230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.583260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.583290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.583332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.216 [2024-11-06 11:12:05.583363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.583401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.583433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.583462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.583496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.583524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.583563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.583592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.583623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 
[2024-11-06 11:12:05.583655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.583687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.583725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.583761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.583797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.583829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.583863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.583902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.583933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.583965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.584006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.584041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.584073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.584102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.584131] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.584157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.584190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.584219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.584249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.584276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.584321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.584349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.584918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.584953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.584982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585575] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.585981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 
11:12:05.586472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.586998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.217 [2024-11-06 11:12:05.587036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 
[2024-11-06 11:12:05.587394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587849] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.218 [2024-11-06 11:12:05.587881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" errors repeated, timestamps 11:12:05.587921 through 11:12:05.599325 ...]
00:30:14.504 [2024-11-06 11:12:05.599364] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.599995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600274] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.600988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 
11:12:05.601286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.601970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 
[2024-11-06 11:12:05.602574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.504 [2024-11-06 11:12:05.602739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.602775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.602806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.602853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.602884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.602920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.602954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.602985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603081] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.603971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604182] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.604990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 
11:12:05.605114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.605973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.606014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.606047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.606078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.606110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 
[2024-11-06 11:12:05.606141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.505 [2024-11-06 11:12:05.606175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.506 [2024-11-06 11:12:05.606205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.506 [2024-11-06 11:12:05.606236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.506 [2024-11-06 11:12:05.606700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.506 [2024-11-06 11:12:05.606734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.506 [2024-11-06 11:12:05.606774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.506 [2024-11-06 11:12:05.606802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.506 [2024-11-06 11:12:05.606830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.506 [2024-11-06 11:12:05.606857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.506 [2024-11-06 11:12:05.606882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.506 [2024-11-06 11:12:05.606906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.506 [2024-11-06 11:12:05.606929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.506 [2024-11-06 11:12:05.606953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.506 [2024-11-06 11:12:05.606985] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.506 [2024-11-06 11:12:05.607014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [identical error entries repeated, timestamps 11:12:05.607049 through 11:12:05.613260 elided] 00:30:14.507 Message suppressed 999 times: Read completed with error (sct=0, sc=15) [identical error entries repeated, timestamps 11:12:05.613306 through 11:12:05.617575 elided] 00:30:14.509 [2024-11-06 11:12:05.617601] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.617627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.617651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.617788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.617822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.617854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.617892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.617925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.617960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.617994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618699] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.618993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 
11:12:05.619640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.619904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.620497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.620530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.620558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.509 [2024-11-06 11:12:05.620593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.620625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.620659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.620689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.620729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.620767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.620796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.620850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.620884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.620917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.620955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.620991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 
[2024-11-06 11:12:05.621174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621613] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.621984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622575] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.622993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.623025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.623053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.623079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.623103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.623128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.623153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.623176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.623201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.623224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.623248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.623271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.623299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.623328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.623359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 
11:12:05.623390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.623421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.510 [2024-11-06 11:12:05.623453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.623997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.624025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.624060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.624093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.624128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 
[2024-11-06 11:12:05.624157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.624186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.624218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.624249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.624288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.624317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.625045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.625077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.625109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.625136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.625164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.625193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.625221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.625248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.511 [2024-11-06 11:12:05.625282] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-11-06 11:12:05.636146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636614] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.636967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637669] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.637957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.638019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.638048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.638078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.514 [2024-11-06 11:12:05.638107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 
11:12:05.638667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.638983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.639018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.639049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.639606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.639636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.639670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.639698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.639739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.639775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.639807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.639837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.639870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.639902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.639930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.639966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.639998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 
[2024-11-06 11:12:05.640159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640536] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.640961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.515 [2024-11-06 11:12:05.641015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.641045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.641076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.641105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.641136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.641169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.641201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.641229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.641268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.641305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.641337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.641365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.515 [2024-11-06 11:12:05.641396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.641425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.641456] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.641494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.641539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.641678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.641712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.641741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.641794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.641830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.641863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.641894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.641925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.641954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.641985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 
11:12:05.642808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.642996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.643029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.643062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.643093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.643124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.643154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.643185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.643216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.643246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.516 [2024-11-06 11:12:05.643275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.518 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[2024-11-06 11:12:05.654662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.654695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.654728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.654762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.654794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.654825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.654859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.654890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.654923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.654958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.654990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655146] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.655914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.656296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.656329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.519 [2024-11-06 11:12:05.656359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.656392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.656425] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.656457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.656487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.656517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.656577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.656617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.656648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.656681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.656717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.656751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.656782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.656848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.656880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.656911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.656951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.656983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 
11:12:05.657464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.657991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 
[2024-11-06 11:12:05.658360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658915] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.658989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.659555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.659592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.659624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.520 [2024-11-06 11:12:05.659650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.659674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.659699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.659723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.659752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.659777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.659803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.659828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.521 [2024-11-06 11:12:05.659851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.659877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.659901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.659925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.659949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.659973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.659996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660192] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.660962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 
11:12:05.660993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.661025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.661055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.661086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.661119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.661156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.661187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.661219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.661254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.661285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.661317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.661349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.661381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.661419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.661447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521 [2024-11-06 11:12:05.661474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.521
[identical nvmf_bdev_ctrlr_read_cmd error repeated for each subsequent read command; duplicate entries from 11:12:05.661503 through 11:12:05.673180 omitted]
00:30:14.524 [2024-11-06 11:12:05.673211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.524 [2024-11-06 11:12:05.673241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.524 [2024-11-06 11:12:05.673268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 
[2024-11-06 11:12:05.673690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.673989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674185] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.674980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675329] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.675812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 
11:12:05.676544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.525 [2024-11-06 11:12:05.676966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 
[2024-11-06 11:12:05.677489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.677995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678082] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.678989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679021] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 true 00:30:14.526 [2024-11-06 11:12:05.679535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.526 [2024-11-06 11:12:05.679633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.679659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.679693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.679723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.679758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.679787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.679822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.680338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.680371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.680401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 
[2024-11-06 11:12:05.680433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.680462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.680497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.680527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.680559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.680588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.680620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.680651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.680680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.680708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.680738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.680772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.680806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.680841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.527 [2024-11-06 11:12:05.680871] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.529 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:14.530 [2024-11-06 11:12:05.691485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.691513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.691544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.691574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.691608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.691638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.691669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.691699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.691728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.691762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.691795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.691827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.691973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.692030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.692060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 
[2024-11-06 11:12:05.692095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.692129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.692163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.692193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.692228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.692262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.692291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.692322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.692352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.692384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.692416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.692445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.692480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.692508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.692534] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.530 [2024-11-06 11:12:05.692571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.692607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.692635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.692665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.692694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.692725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.692771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.692800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.692825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.692857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.692886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.692911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.692935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.692959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.531 [2024-11-06 11:12:05.692983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693426] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.693973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.694552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.694580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.694609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.694641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.694671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.694705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.694733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.694767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.694808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.694837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.694864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 
11:12:05.694890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.694915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.694945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.694978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 
[2024-11-06 11:12:05.695811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.695981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.531 [2024-11-06 11:12:05.696011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696258] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.696995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697376] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.697982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.698013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.698043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.698074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.698104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.698135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.698165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.698192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.698221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.698251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 11:12:05.698282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532 [2024-11-06 
11:12:05.698311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.532
[... last message repeated for successive timestamps 11:12:05.698 through 11:12:05.709; duplicate records elided ...]
00:30:14.535 11:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929
00:30:14.535 11:12:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
> SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709902] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.709998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 
11:12:05.710837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.710902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.536 [2024-11-06 11:12:05.711915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.711947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.711977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 
[2024-11-06 11:12:05.712103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712537] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.712977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.537 [2024-11-06 11:12:05.713006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.713037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.713071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.713100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.713132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.713172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.713198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.713233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.713269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.713923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.713953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.713982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714072] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.714978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.715012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 
11:12:05.715036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.715069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.715098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.715128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.715156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.715186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.715221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.715252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.715284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.715314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.715345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.715380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.715409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.715442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.715474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.715507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.537 [2024-11-06 11:12:05.715537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.715565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.715606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.715634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.715663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.715695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.715735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.715775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.715802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.715830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.715860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.715892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.715923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 
[2024-11-06 11:12:05.716106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716618] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.716982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.717014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.538 [2024-11-06 11:12:05.717044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.538 [2024-11-06 11:12:05.717075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:30:14.541 [2024-11-06 11:12:05.727474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728438] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.541 [2024-11-06 11:12:05.728858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.728890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.728921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.728951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.728978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 
11:12:05.729287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.729921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 
[2024-11-06 11:12:05.730284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730635] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.730982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731528] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.542 [2024-11-06 11:12:05.731837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.731865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.731899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.732467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.732498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.732525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.732556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.732588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.732621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.732649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.732683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.732711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.732744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.732783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.732814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.732845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.732875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.732907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.732935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 
11:12:05.732968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.732998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.733025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.733057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.733087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.733142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.733171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.733206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.733235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.733292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.733323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.733360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.733387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.733419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.733447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.733482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.733509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.543 [2024-11-06 11:12:05.733544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.971363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.971529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.971649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.971789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.971920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.972044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.972178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.972298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.972412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.972536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.972665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 
[2024-11-06 11:12:05.972804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.972938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.973064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.973187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.973308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.973433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.973558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.973678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.973800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.973920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.974047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.974167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.974287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.974409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.974531] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.974652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.974783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.974907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.976464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.976608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.976732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.976861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.976983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.977101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.977219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.977333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.977461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.977578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.808 [2024-11-06 11:12:05.977701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.808 [2024-11-06 11:12:05.977834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.811 11:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.811 [2024-11-06 11:12:06.209163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.210795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.210828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.210858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.210919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.210948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.210980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 
11:12:06.211412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.211969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 
[2024-11-06 11:12:06.212321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212756] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.212985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.213008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.213043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.213071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.213100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.213134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.812 [2024-11-06 11:12:06.213165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.213778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.213814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.213843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.213877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.213908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.213939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.213965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.213993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.214025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.214055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.214084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.214118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.214147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.214175] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.812 [2024-11-06 11:12:06.214208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.214975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 
11:12:06.215119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.215980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 
[2024-11-06 11:12:06.216155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216592] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.216997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:14.813 [2024-11-06 11:12:06.217027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.217059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.217096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.217127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.217156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.217188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.813 [2024-11-06 11:12:06.217238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.814 [2024-11-06 11:12:06.217265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.814 [2024-11-06 11:12:06.217292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.814 [2024-11-06 11:12:06.217321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.814 [2024-11-06 11:12:06.217349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.814 [2024-11-06 11:12:06.217376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.814 [2024-11-06 11:12:06.217405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.814 [2024-11-06 11:12:06.217437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:14.814 [2024-11-06 11:12:06.217468] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:14.814 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
block size 512 > SGL length 1 00:30:15.087 [2024-11-06 11:12:06.228771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.087 [2024-11-06 11:12:06.228830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.087 [2024-11-06 11:12:06.228860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.087 [2024-11-06 11:12:06.228899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.087 [2024-11-06 11:12:06.228932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.087 [2024-11-06 11:12:06.228962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.228997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 
11:12:06.229215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.229588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 
[2024-11-06 11:12:06.230695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.230992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231114] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.231976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232044] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.088 [2024-11-06 11:12:06.232699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.232729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.232757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.232788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.232822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.232849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.232873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.232902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.232929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.232957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.232987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 
11:12:06.233042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 
[2024-11-06 11:12:06.233956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.233985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.234019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.234050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.234080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.234112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.234147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.234178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.234221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.234250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.234824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.234857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.234886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.234913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.234940] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.234970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.089 [2024-11-06 11:12:06.235733] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:30:15.092 11:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:30:15.092 11:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.246606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.246637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.246666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.246699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.246728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.246757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.246788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.246816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.246861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.246890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.246921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.246952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.246982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 
11:12:06.247052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.247958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 
[2024-11-06 11:12:06.247990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.248021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.248057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.248089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.248590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.248621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.248649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.248689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.248721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.248753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.248790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.248817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.248848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.248880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.248911] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.248937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.248970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249891] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.093 [2024-11-06 11:12:06.249951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.249985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 
11:12:06.250968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.250998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 
[2024-11-06 11:12:06.251893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.251982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252318] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:15.094 [2024-11-06 11:12:06.252791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.253391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.253423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.253453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.253497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.253533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.253563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.253594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.253619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.253652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.253682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.253716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.253751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.253780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.094 [2024-11-06 11:12:06.253819] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.096 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:30:15.098 [identical "Read NLB 1 * block size 512 > SGL length 1" error lines from ctrlr_bdev.c:361 repeated; duplicates omitted]
block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.264913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.264943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.265071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.265106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.265139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.265180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.265210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.265240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.265272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.265306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.265335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.265363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.265395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.265424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 
11:12:06.265463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.265496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.265527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.265561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.265588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 
[2024-11-06 11:12:06.266726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.266979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267216] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.267977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.268006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.098 [2024-11-06 11:12:06.268043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268295] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.268976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 
11:12:06.269201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.269612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 
[2024-11-06 11:12:06.270671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.270967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271141] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.099 [2024-11-06 11:12:06.271719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.100 [2024-11-06 11:12:06.271752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.100 [2024-11-06 11:12:06.271786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.100 [2024-11-06 11:12:06.271819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.100 [2024-11-06 11:12:06.271846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.100 [2024-11-06 11:12:06.271876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.100 [2024-11-06 11:12:06.271909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.100 [2024-11-06 11:12:06.271940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.100 [2024-11-06 11:12:06.271971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.100 [2024-11-06 11:12:06.272002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.100 [2024-11-06 11:12:06.272031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.100 [2024-11-06 11:12:06.272061] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.282625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:30:15.103 [2024-11-06 11:12:06.282650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.282674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.282698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.282723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.282752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.282776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.282803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.282838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.282870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.282903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.282937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.282967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.282996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283059] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.283644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 
11:12:06.284552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.284999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 
[2024-11-06 11:12:06.285492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.103 [2024-11-06 11:12:06.285878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.285904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.285938] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.285970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.286968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287001] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.287947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 
11:12:06.287979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.288009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.288038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.288069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.288099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.288133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.288163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.288193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.288225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.288258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.288284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.288315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.288354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.288388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.288418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.288456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.288487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.289073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.289107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.289141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.289171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.289204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.289234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.289263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.289297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.289331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.289364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.104 [2024-11-06 11:12:06.289394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.289424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 
[2024-11-06 11:12:06.289452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.289485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.289513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.289543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.289585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.289614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.289651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.289680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.289713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.289750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.289781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.289823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.289853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.289881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.289922] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.289950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.289980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290820] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 [2024-11-06 11:12:06.290849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:30:15.105 true 00:30:15.105 11:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:15.105 11:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.045 11:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.304 11:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:16.304 11:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:16.304 true 00:30:16.304 11:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:16.304 11:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.563 11:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.824 11:12:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:16.824 11:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:16.824 true 00:30:17.084 11:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:17.084 11:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:18.025 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:18.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:18.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:18.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:18.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:18.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:18.285 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:18.285 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:18.545 true 00:30:18.545 11:12:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:18.545 11:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:19.485 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.485 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:19.485 11:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:19.746 true 00:30:19.746 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:19.746 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.033 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.033 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:20.033 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:20.293 true 00:30:20.293 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:20.293 11:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.675 11:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.675 11:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:21.675 11:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:21.675 true 00:30:21.936 11:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:21.936 11:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:22.764 11:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:22.764 11:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:22.764 11:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:23.024 true 00:30:23.024 11:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:23.024 11:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.284 11:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.284 11:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:23.284 11:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:23.544 true 00:30:23.544 11:12:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:23.544 11:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.805 11:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.805 11:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:23.805 11:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:24.064 true 00:30:24.064 11:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:24.064 11:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.325 11:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.585 11:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:24.585 11:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:24.585 true 
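The cycle repeating in the trace above (check the workload PID with kill -0, remove namespace 1, re-add the Delay0 bdev, grow NULL1 by one block) can be sketched as a self-contained loop. This is a minimal sketch of the pattern at ns_hotplug_stress.sh@44-50 as it appears in this log, not the script itself: the rpc() function is a stand-in for scripts/rpc.py so the sketch runs without a live SPDK target, and the three-pass iteration count is illustrative (the real loop runs until the workload exits).

```shell
#!/bin/sh
# Sketch of the hotplug stress cycle seen in this log (sh@44-50).
# rpc() stands in for scripts/rpc.py; no nvmf target is contacted.
rpc() { echo "rpc.py $*"; }

null_size=1021                 # log shows 1022, 1023, ... one step per pass
for _ in 1 2 3; do
    null_size=$((null_size + 1))
    # kill -0 "$workload_pid"  # real script stops once the fio workload exits
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc bdev_null_resize NULL1 "$null_size"
done
echo "final null_size=$null_size"
```

The resize after each remove/add pass is what drives the namespace-change notifications the initiator must absorb while I/O is in flight.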
00:30:24.586 11:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:24.586 11:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.846 11:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.106 11:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:25.106 11:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:25.106 true 00:30:25.106 11:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:25.107 11:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.366 11:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.627 Initializing NVMe Controllers 00:30:25.627 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:25.627 Controller IO queue size 128, less than required. 00:30:25.627 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:25.627 Controller IO queue size 128, less than required. 00:30:25.627 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:25.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:25.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:25.627 Initialization complete. Launching workers. 00:30:25.627 ======================================================== 00:30:25.627 Latency(us) 00:30:25.627 Device Information : IOPS MiB/s Average min max 00:30:25.627 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2602.07 1.27 29764.26 1947.71 1072678.43 00:30:25.627 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17712.76 8.65 7226.55 1521.65 437187.59 00:30:25.627 ======================================================== 00:30:25.627 Total : 20314.83 9.92 10113.34 1521.65 1072678.43 00:30:25.627 00:30:25.627 11:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:25.627 11:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:25.627 true 00:30:25.627 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3455929 00:30:25.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3455929) - No such process 00:30:25.627 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3455929 00:30:25.627 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.888 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:26.148 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:30:26.148 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:30:26.148 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:30:26.148 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:26.148 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:30:26.148 null0 00:30:26.148 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:26.148 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:26.148 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:30:26.409 null1 00:30:26.409 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:26.409 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:26.409 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:26.409 null2 00:30:26.670 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:26.670 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:26.670 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:26.670 null3 00:30:26.670 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:26.670 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:26.670 11:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:26.931 null4 00:30:26.931 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:26.931 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:26.931 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:26.931 null5 00:30:26.931 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:26.931 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:30:26.931 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:27.192 null6 00:30:27.192 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:27.192 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:27.192 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:27.455 null7 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
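The null0 through null7 creations above come from the loop at ns_hotplug_stress.sh@58-60: one 100 MiB null bdev with 4096-byte blocks per worker thread. A minimal sketch of that loop, again with rpc() standing in for scripts/rpc.py so it runs standalone:

```shell
#!/bin/sh
# Sketch of the bdev-creation loop (sh@58-60): eight null bdevs,
# each 100 MiB with a 4096-byte block size, one per worker.
rpc() { echo "rpc.py $*"; }

nthreads=8
i=0
while [ "$i" -lt "$nthreads" ]; do
    rpc bdev_null_create "null$i" 100 4096
    i=$((i + 1))
done
```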
00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.455 11:12:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
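Each backgrounded worker traced above runs add_remove, whose shape is visible in the sh@14-18 lines: ten passes of attaching its bdev at a fixed NSID and detaching it again. The sketch below mirrors that shape under the same stand-in rpc() assumption; the single foreground call at the end replaces the eight backgrounded workers the real script collects with wait.

```shell
#!/bin/sh
# Sketch of add_remove() (sh@14-18): attach a bdev at a fixed NSID,
# detach it, ten times over. rpc() stands in for scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

add_remove() {
    nsid=$1 bdev=$2
    i=0
    while [ "$i" -lt 10 ]; do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        i=$((i + 1))
    done
}

add_remove 1 null0   # real script backgrounds 8 of these and waits on the pids
```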
00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3462653 3462654 3462658 3462660 3462663 3462666 3462669 3462671 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:27.455 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.456 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:30:27.456 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:27.717 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:27.717 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:27.717 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:27.717 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:27.717 11:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.717 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:27.979 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.979 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.979 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:27.979 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.979 11:12:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:27.979 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:27.979 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:27.979 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:27.979 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:27.979 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:27.979 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:27.979 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.979 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.979 11:12:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:27.979 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.979 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.979 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.240 11:12:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:28.240 11:12:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:28.240 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:28.501 11:12:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
4 nqn.2016-06.io.spdk:cnode1 null3 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.501 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:28.762 11:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:28.762 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:28.762 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.762 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.762 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:28.762 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:28.762 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:28.762 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.762 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.762 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:28.762 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:28.762 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:29.022 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.022 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.022 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:29.022 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.022 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.022 11:12:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:29.022 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:29.022 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.022 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.022 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:29.022 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.022 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.022 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.022 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:29.022 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.022 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.022 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:29.022 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.023 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.023 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:29.023 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:29.023 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.023 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:29.023 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.023 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:29.023 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.023 11:12:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.023 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:29.023 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.284 11:12:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.284 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:29.544 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:29.544 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:29.544 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:29.544 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:29.544 11:12:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.544 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.544 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:29.544 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.544 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.544 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:29.544 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:29.545 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:29.545 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.545 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.545 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
7 nqn.2016-06.io.spdk:cnode1 null6 00:30:29.545 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.545 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.545 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:29.805 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.805 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.805 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:29.805 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.805 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.805 11:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:29.805 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.066 11:12:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.066 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.328 11:12:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:30.328 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.328 11:12:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:30.589 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.589 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.589 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:30.589 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.589 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.589 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:30.589 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.589 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:30.589 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.589 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.589 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:30.589 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:30.589 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:30.590 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:30.590 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:30.590 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:30.590 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.590 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.590 11:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:30.851 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.851 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.851 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:30.851 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:30.851 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.851 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.851 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:30.851 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.851 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.851 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:30.851 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:30:30.851 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.852 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:30.852 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.852 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.852 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:30.852 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.852 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.852 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.852 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:30.852 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:30.852 11:12:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.852 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:31.113 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:31.113 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:31.113 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:31.113 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:31.113 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:31.113 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:31.113 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:31.113 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:31.113 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:30:31.113 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:31.113 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:31.113 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:31.113 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:31.113 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:31.113 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:31.374 11:12:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:31.374 rmmod nvme_tcp 00:30:31.374 rmmod nvme_fabrics 00:30:31.374 rmmod nvme_keyring 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3455535 ']' 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3455535 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3455535 ']' 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3455535 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3455535 00:30:31.374 11:12:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3455535' 00:30:31.374 killing process with pid 3455535 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3455535 00:30:31.374 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3455535 00:30:31.635 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:31.635 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:31.635 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:31.635 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:31.636 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:30:31.636 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:31.636 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:30:31.636 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:31.636 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:31.636 11:12:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.636 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.636 11:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.547 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:33.547 00:30:33.547 real 0m48.148s 00:30:33.547 user 2m57.397s 00:30:33.547 sys 0m20.218s 00:30:33.547 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:33.547 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:33.547 ************************************ 00:30:33.547 END TEST nvmf_ns_hotplug_stress 00:30:33.547 ************************************ 00:30:33.547 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:33.547 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:33.547 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:33.547 11:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:33.808 ************************************ 00:30:33.808 START TEST nvmf_delete_subsystem 00:30:33.808 ************************************ 00:30:33.808 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:33.808 * Looking for test storage... 00:30:33.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:33.808 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:33.808 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:30:33.808 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:33.808 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:33.808 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:33.808 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:33.808 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:33.808 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:33.808 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:33.808 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:33.808 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:33.808 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:33.808 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:33.808 
11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:33.808 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:33.808 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:33.809 11:12:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:33.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.809 --rc genhtml_branch_coverage=1 00:30:33.809 --rc genhtml_function_coverage=1 00:30:33.809 --rc genhtml_legend=1 00:30:33.809 --rc geninfo_all_blocks=1 00:30:33.809 --rc geninfo_unexecuted_blocks=1 00:30:33.809 00:30:33.809 ' 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:33.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.809 --rc genhtml_branch_coverage=1 00:30:33.809 --rc genhtml_function_coverage=1 00:30:33.809 --rc genhtml_legend=1 00:30:33.809 --rc geninfo_all_blocks=1 00:30:33.809 --rc geninfo_unexecuted_blocks=1 00:30:33.809 00:30:33.809 ' 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:33.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.809 --rc genhtml_branch_coverage=1 00:30:33.809 --rc genhtml_function_coverage=1 00:30:33.809 --rc genhtml_legend=1 00:30:33.809 --rc geninfo_all_blocks=1 00:30:33.809 --rc 
geninfo_unexecuted_blocks=1 00:30:33.809 00:30:33.809 ' 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:33.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.809 --rc genhtml_branch_coverage=1 00:30:33.809 --rc genhtml_function_coverage=1 00:30:33.809 --rc genhtml_legend=1 00:30:33.809 --rc geninfo_all_blocks=1 00:30:33.809 --rc geninfo_unexecuted_blocks=1 00:30:33.809 00:30:33.809 ' 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:33.809 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.070 
11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:34.070 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:34.071 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:34.071 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:34.071 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:34.071 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:34.071 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.071 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:34.071 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.071 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:34.071 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:34.071 11:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:34.071 11:12:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:40.735 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:40.735 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:40.735 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:40.735 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:40.735 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:40.735 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:40.735 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:40.735 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:40.735 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:40.735 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:40.735 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:40.735 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:40.736 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.1 (0x8086 - 0x159b)' 00:30:40.736 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:40.736 11:12:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:40.736 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:40.736 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:40.736 11:12:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:40.736 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:30:41.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:41.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:30:41.021 00:30:41.021 --- 10.0.0.2 ping statistics --- 00:30:41.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.021 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:41.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:30:41.021 00:30:41.021 --- 10.0.0.1 ping statistics --- 00:30:41.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.021 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3467749 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3467749 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3467749 ']' 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:41.021 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:41.021 [2024-11-06 11:12:32.439576] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:41.021 [2024-11-06 11:12:32.440544] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:30:41.021 [2024-11-06 11:12:32.440586] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:41.282 [2024-11-06 11:12:32.518323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:41.282 [2024-11-06 11:12:32.553192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:41.282 [2024-11-06 11:12:32.553226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:41.282 [2024-11-06 11:12:32.553234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:41.282 [2024-11-06 11:12:32.553241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:41.282 [2024-11-06 11:12:32.553247] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:41.282 [2024-11-06 11:12:32.554376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.282 [2024-11-06 11:12:32.554379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.282 [2024-11-06 11:12:32.609033] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:41.282 [2024-11-06 11:12:32.609505] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:41.282 [2024-11-06 11:12:32.609868] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:41.282 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:41.282 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:30:41.282 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:41.282 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:41.282 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:41.282 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:41.282 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:41.282 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.282 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:41.282 [2024-11-06 11:12:32.675260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.282 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.282 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:41.282 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.282 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:41.282 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.282 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:41.282 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.282 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:41.543 [2024-11-06 11:12:32.703678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.543 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.543 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:41.543 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.543 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:41.543 NULL1 00:30:41.543 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.543 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:41.543 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.543 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:41.543 Delay0 00:30:41.543 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.543 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.543 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.543 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:41.544 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.544 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3467772 00:30:41.544 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:41.544 11:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:41.544 [2024-11-06 11:12:32.798077] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:30:43.455 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:43.455 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.455 11:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 
00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 [2024-11-06 11:12:34.920129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d56680 is same with the state(6) to be set 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error 
(sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 
Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Read completed with 
error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 Write completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 starting I/O failed: -6 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.717 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 starting I/O failed: -6 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 [2024-11-06 11:12:34.923423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f562000d490 is same with the state(6) to be set 00:30:43.718 Write completed with error (sct=0, sc=8) 00:30:43.718 Write completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Write completed with error (sct=0, sc=8) 00:30:43.718 Write completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 
00:30:43.718 Write completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Write completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Write completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Write completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Write completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Write completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Write completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Write completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Write completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:43.718 Write completed with error (sct=0, sc=8) 00:30:43.718 Read completed 
with error (sct=0, sc=8) 00:30:43.718 Read completed with error (sct=0, sc=8) 00:30:44.659 [2024-11-06 11:12:35.898878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d579a0 is same with the state(6) to be set 00:30:44.659 Write completed with error (sct=0, sc=8) 00:30:44.659 Write completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Write completed with error (sct=0, sc=8) 00:30:44.659 Write completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Write completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Write completed with error (sct=0, sc=8) 00:30:44.659 Write completed with error (sct=0, sc=8) 00:30:44.659 Write completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Write completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 [2024-11-06 11:12:35.924115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d56860 is same with the state(6) to be set 00:30:44.659 Write completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with 
error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Write completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Write completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Write completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 Read completed with error (sct=0, sc=8) 00:30:44.659 [2024-11-06 11:12:35.924207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d564a0 is same with the state(6) to be set 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Write completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Write completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Write completed with error (sct=0, sc=8) 00:30:44.660 Write completed with error (sct=0, sc=8) 00:30:44.660 Write completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 
00:30:44.660 Write completed with error (sct=0, sc=8) 00:30:44.660 Write completed with error (sct=0, sc=8) 00:30:44.660 Write completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Write completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 [2024-11-06 11:12:35.925891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f562000d7c0 is same with the state(6) to be set 00:30:44.660 Write completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Write completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Write completed with error (sct=0, sc=8) 00:30:44.660 Write completed with error (sct=0, sc=8) 00:30:44.660 Write completed with error (sct=0, sc=8) 00:30:44.660 Write completed with error (sct=0, sc=8) 00:30:44.660 Write completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Write completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 
00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 Read completed with error (sct=0, sc=8) 00:30:44.660 [2024-11-06 11:12:35.925991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f562000d020 is same with the state(6) to be set 00:30:44.660 Initializing NVMe Controllers 00:30:44.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:44.660 Controller IO queue size 128, less than required. 00:30:44.660 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:44.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:44.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:44.660 Initialization complete. Launching workers. 00:30:44.660 ======================================================== 00:30:44.660 Latency(us) 00:30:44.660 Device Information : IOPS MiB/s Average min max 00:30:44.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.28 0.08 892302.60 217.91 1007540.02 00:30:44.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.81 0.08 949188.69 269.39 2001761.60 00:30:44.660 ======================================================== 00:30:44.660 Total : 333.09 0.16 920107.91 217.91 2001761.60 00:30:44.660 00:30:44.660 [2024-11-06 11:12:35.926496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d579a0 (9): Bad file descriptor 00:30:44.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:44.660 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.660 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 
00:30:44.660 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3467772 00:30:44.660 11:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3467772 00:30:45.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3467772) - No such process 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3467772 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3467772 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3467772 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:30:45.231 11:12:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:45.231 [2024-11-06 11:12:36.459516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.231 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.232 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:45.232 11:12:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.232 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:45.232 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.232 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3468466 00:30:45.232 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:45.232 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:45.232 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3468466 00:30:45.232 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:45.232 [2024-11-06 11:12:36.530959] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:30:45.803 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:45.803 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3468466 00:30:45.803 11:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:46.373 11:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:46.373 11:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3468466 00:30:46.373 11:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:46.634 11:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:46.634 11:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3468466 00:30:46.634 11:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:47.206 11:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:47.206 11:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3468466 00:30:47.206 11:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:47.777 11:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:47.777 11:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3468466 00:30:47.777 11:12:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:48.347 11:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:48.348 11:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3468466 00:30:48.348 11:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:48.348 Initializing NVMe Controllers 00:30:48.348 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:48.348 Controller IO queue size 128, less than required. 00:30:48.348 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:48.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:48.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:48.348 Initialization complete. Launching workers. 
00:30:48.348 ======================================================== 00:30:48.348 Latency(us) 00:30:48.348 Device Information : IOPS MiB/s Average min max 00:30:48.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002237.71 1000238.32 1006016.75 00:30:48.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004217.81 1000354.97 1011481.48 00:30:48.348 ======================================================== 00:30:48.348 Total : 256.00 0.12 1003227.76 1000238.32 1011481.48 00:30:48.348 00:30:48.609 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:48.609 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3468466 00:30:48.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3468466) - No such process 00:30:48.609 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3468466 00:30:48.609 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:48.609 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:48.609 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:48.609 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:48.609 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:48.609 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:48.609 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:30:48.609 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:48.609 rmmod nvme_tcp 00:30:48.868 rmmod nvme_fabrics 00:30:48.868 rmmod nvme_keyring 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3467749 ']' 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3467749 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3467749 ']' 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3467749 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3467749 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:48.868 11:12:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3467749' 00:30:48.868 killing process with pid 3467749 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3467749 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3467749 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:48.868 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:49.129 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:30:49.129 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:49.129 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:30:49.129 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:49.129 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:49.129 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.129 11:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.129 11:12:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.042 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:51.042 00:30:51.042 real 0m17.370s 00:30:51.042 user 0m26.436s 00:30:51.042 sys 0m7.039s 00:30:51.042 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:51.042 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:51.042 ************************************ 00:30:51.042 END TEST nvmf_delete_subsystem 00:30:51.042 ************************************ 00:30:51.042 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:51.042 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:51.042 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:51.042 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:51.042 ************************************ 00:30:51.042 START TEST nvmf_host_management 00:30:51.042 ************************************ 00:30:51.042 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:51.303 * Looking for test storage... 
00:30:51.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:51.303 11:12:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:51.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.303 --rc genhtml_branch_coverage=1 00:30:51.303 --rc genhtml_function_coverage=1 00:30:51.303 --rc genhtml_legend=1 00:30:51.303 --rc geninfo_all_blocks=1 00:30:51.303 --rc geninfo_unexecuted_blocks=1 00:30:51.303 00:30:51.303 ' 00:30:51.303 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:51.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.303 --rc genhtml_branch_coverage=1 00:30:51.303 --rc genhtml_function_coverage=1 00:30:51.303 --rc genhtml_legend=1 00:30:51.303 --rc geninfo_all_blocks=1 00:30:51.303 --rc geninfo_unexecuted_blocks=1 00:30:51.303 00:30:51.303 ' 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:51.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.304 --rc genhtml_branch_coverage=1 00:30:51.304 --rc genhtml_function_coverage=1 00:30:51.304 --rc genhtml_legend=1 00:30:51.304 --rc geninfo_all_blocks=1 00:30:51.304 --rc geninfo_unexecuted_blocks=1 00:30:51.304 00:30:51.304 ' 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:51.304 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.304 --rc genhtml_branch_coverage=1 00:30:51.304 --rc genhtml_function_coverage=1 00:30:51.304 --rc genhtml_legend=1 00:30:51.304 --rc geninfo_all_blocks=1 00:30:51.304 --rc geninfo_unexecuted_blocks=1 00:30:51.304 00:30:51.304 ' 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:51.304 11:12:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.304 
11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:51.304 11:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:59.450 
11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.450 11:12:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:59.450 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.450 11:12:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:59.450 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.450 11:12:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.450 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:59.451 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:59.451 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:59.451 11:12:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:59.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:59.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:30:59.451 00:30:59.451 --- 10.0.0.2 ping statistics --- 00:30:59.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.451 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:30:59.451 11:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:59.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:30:59.451 00:30:59.451 --- 10.0.0.1 ping statistics --- 00:30:59.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.451 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
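The `nvmf_tcp_init` trace above builds a loopback NVMe/TCP topology by moving the target-side interface into a network namespace and verifying connectivity in both directions. A minimal sketch of the same steps (requires root; interface names, IPs and the namespace name are taken from the log, but this is a reconstruction, not the verbatim `nvmf/common.sh` source):

```shell
# Reconstruction of the nvmf_tcp_init steps traced above (needs root).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP stays on the host side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP discovery/IO port toward the namespace
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # host -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                    # namespace -> host
```

With this in place, the target application is launched under `ip netns exec "$NS"` (the `NVMF_TARGET_NS_CMD` prefix seen later in the trace) so it listens on 10.0.0.2 while the initiator connects from the host side.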
00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3473440 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3473440 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3473440 ']' 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:59.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:59.451 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:59.451 [2024-11-06 11:12:50.120555] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:59.451 [2024-11-06 11:12:50.121713] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:30:59.451 [2024-11-06 11:12:50.121775] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:59.451 [2024-11-06 11:12:50.223162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:59.451 [2024-11-06 11:12:50.276575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:59.451 [2024-11-06 11:12:50.276632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:59.451 [2024-11-06 11:12:50.276641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.451 [2024-11-06 11:12:50.276648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:59.451 [2024-11-06 11:12:50.276658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:59.451 [2024-11-06 11:12:50.278662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:59.451 [2024-11-06 11:12:50.278823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:59.451 [2024-11-06 11:12:50.279034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:59.451 [2024-11-06 11:12:50.279035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.451 [2024-11-06 11:12:50.355523] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:59.452 [2024-11-06 11:12:50.356217] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:59.452 [2024-11-06 11:12:50.356766] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:59.452 [2024-11-06 11:12:50.357152] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:59.452 [2024-11-06 11:12:50.357222] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:59.714 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:59.714 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:30:59.714 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:59.714 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:59.714 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:59.714 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.714 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:59.714 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.714 11:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:59.714 [2024-11-06 11:12:50.984062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.714 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.714 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:59.714 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:59.714 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:59.714 11:12:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:59.714 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:59.714 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:59.714 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.714 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:59.714 Malloc0 00:30:59.714 [2024-11-06 11:12:51.076370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:59.714 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.714 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:59.714 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:59.714 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:59.974 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3473609 00:30:59.974 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3473609 /var/tmp/bdevperf.sock 00:30:59.974 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3473609 ']' 00:30:59.975 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:59.975 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:59.975 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:59.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:59.975 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:59.975 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:59.975 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:59.975 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:59.975 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:59.975 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:59.975 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:59.975 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:59.975 { 00:30:59.975 "params": { 00:30:59.975 "name": "Nvme$subsystem", 00:30:59.975 "trtype": "$TEST_TRANSPORT", 00:30:59.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:59.975 "adrfam": "ipv4", 00:30:59.975 "trsvcid": "$NVMF_PORT", 00:30:59.975 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:59.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:59.975 "hdgst": ${hdgst:-false}, 00:30:59.975 "ddgst": ${ddgst:-false} 00:30:59.975 }, 00:30:59.975 "method": "bdev_nvme_attach_controller" 00:30:59.975 } 00:30:59.975 EOF 00:30:59.975 )") 00:30:59.975 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:59.975 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:59.975 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:59.975 11:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:59.975 "params": { 00:30:59.975 "name": "Nvme0", 00:30:59.975 "trtype": "tcp", 00:30:59.975 "traddr": "10.0.0.2", 00:30:59.975 "adrfam": "ipv4", 00:30:59.975 "trsvcid": "4420", 00:30:59.975 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:59.975 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:59.975 "hdgst": false, 00:30:59.975 "ddgst": false 00:30:59.975 }, 00:30:59.975 "method": "bdev_nvme_attach_controller" 00:30:59.975 }' 00:30:59.975 [2024-11-06 11:12:51.183700] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:30:59.975 [2024-11-06 11:12:51.183761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473609 ] 00:30:59.975 [2024-11-06 11:12:51.254890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.975 [2024-11-06 11:12:51.291141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.235 Running I/O for 10 seconds... 
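The `gen_nvmf_target_json` expansion above renders one `bdev_nvme_attach_controller` stanza per subsystem via a `config+=("$(cat <<-EOF ... EOF)")` heredoc, then `jq` merges the stanzas into the JSON fed to bdevperf on `/dev/fd/63`. A standalone sketch of that expansion for subsystem 0 (`gen_attach_json` is a hypothetical name; the field values mirror the `printf` output in the trace):

```shell
# Sketch of the per-subsystem JSON stanza that gen_nvmf_target_json emits.
# The heredoc is unquoted so $n expands, exactly like the traced idiom.
gen_attach_json() {
  local n=${1:-0}
  cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
gen_attach_json 0
```

In the real helper each stanza is appended to a `config` array and `jq .` both validates and pretty-prints the combined document before it reaches bdevperf.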
00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:00.808 11:12:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.808 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:00.808 
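The `waitforio` trace above polls the bdev's read-op counter up to 10 times and succeeds once it crosses the 100-op threshold. A self-contained sketch of that retry pattern (`wait_for_io` and `get_read_ops` are hypothetical stand-ins; the real counter query is `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'`):

```shell
# Retry pattern from host_management.sh's waitforio, reconstructed.
# get_read_ops stands in for the bdev_get_iostat + jq query.
wait_for_io() {
  local threshold=$1 tries=$2 count
  while (( tries > 0 )); do
    count=$(get_read_ops)
    if (( count >= threshold )); then
      return 0                  # enough I/O observed, stop polling
    fi
    tries=$(( tries - 1 ))
    sleep 0.25                  # back off before the next sample
  done
  return 1                      # counter never reached the threshold
}
```

In the traced run the very first sample already reads 771 ops, so `771 -ge 100` holds, `ret=0`, and the loop breaks immediately.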
[2024-11-06 11:12:52.075661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1f2a0 is same with the state(6) to be set 00:31:00.808 [2024-11-06 11:12:52.075883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.808 [2024-11-06 11:12:52.075924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.808 [2024-11-06 11:12:52.075948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.808 [2024-11-06 11:12:52.075957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.808 [2024-11-06 11:12:52.075967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.808 [2024-11-06 11:12:52.075975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.808 [2024-11-06 11:12:52.075984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.808 [2024-11-06 11:12:52.075992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.808 [2024-11-06 11:12:52.076001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.808 [2024-11-06 11:12:52.076009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076110] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:00.809 [2024-11-06 11:12:52.076304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 
11:12:52.076397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076490] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.809 [2024-11-06 11:12:52.076668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.809 [2024-11-06 11:12:52.076678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 
[2024-11-06 11:12:52.076685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076783] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.076987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.076994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.077004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.810 [2024-11-06 11:12:52.077013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.077022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ae1f0 is same with the state(6) to be set 00:31:00.810 [2024-11-06 11:12:52.078264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:00.810 task offset: 111104 on job bdev=Nvme0n1 fails 00:31:00.810 00:31:00.810 Latency(us) 00:31:00.810 [2024-11-06T10:12:52.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:00.810 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:00.810 Job: Nvme0n1 ended in about 0.60 seconds with error 00:31:00.810 Verification LBA range: start 0x0 length 0x400 00:31:00.810 Nvme0n1 : 0.60 1380.95 86.31 106.23 0.00 42063.26 1583.79 36700.16 00:31:00.810 [2024-11-06T10:12:52.232Z] =================================================================================================================== 00:31:00.810 [2024-11-06T10:12:52.232Z] Total : 1380.95 86.31 106.23 0.00 42063.26 1583.79 36700.16 00:31:00.810 
[2024-11-06 11:12:52.080283] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:00.810 [2024-11-06 11:12:52.080306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1795000 (9): Bad file descriptor 00:31:00.810 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.810 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:00.810 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.810 [2024-11-06 11:12:52.081591] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:31:00.810 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:00.810 [2024-11-06 11:12:52.081665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:00.810 [2024-11-06 11:12:52.081695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.810 [2024-11-06 11:12:52.081713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:31:00.810 [2024-11-06 11:12:52.081722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:31:00.810 [2024-11-06 11:12:52.081731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.810 [2024-11-06 11:12:52.081739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to connect tqpair=0x1795000 00:31:00.810 [2024-11-06 11:12:52.081767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1795000 (9): Bad file descriptor 00:31:00.810 [2024-11-06 11:12:52.081781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:00.810 [2024-11-06 11:12:52.081789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:00.810 [2024-11-06 11:12:52.081799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:00.810 [2024-11-06 11:12:52.081808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:00.810 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.810 11:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:01.751 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3473609 00:31:01.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3473609) - No such process 00:31:01.751 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:01.751 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:01.751 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:01.751 11:12:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:01.751 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:01.751 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:01.751 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:01.751 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:01.751 { 00:31:01.751 "params": { 00:31:01.751 "name": "Nvme$subsystem", 00:31:01.751 "trtype": "$TEST_TRANSPORT", 00:31:01.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:01.751 "adrfam": "ipv4", 00:31:01.751 "trsvcid": "$NVMF_PORT", 00:31:01.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:01.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:01.751 "hdgst": ${hdgst:-false}, 00:31:01.751 "ddgst": ${ddgst:-false} 00:31:01.751 }, 00:31:01.751 "method": "bdev_nvme_attach_controller" 00:31:01.751 } 00:31:01.751 EOF 00:31:01.751 )") 00:31:01.751 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:01.751 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:31:01.751 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:01.751 11:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:01.751 "params": { 00:31:01.751 "name": "Nvme0", 00:31:01.751 "trtype": "tcp", 00:31:01.751 "traddr": "10.0.0.2", 00:31:01.751 "adrfam": "ipv4", 00:31:01.751 "trsvcid": "4420", 00:31:01.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.751 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:01.751 "hdgst": false, 00:31:01.751 "ddgst": false 00:31:01.751 }, 00:31:01.751 "method": "bdev_nvme_attach_controller" 00:31:01.751 }' 00:31:01.751 [2024-11-06 11:12:53.162552] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:31:01.751 [2024-11-06 11:12:53.162625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474033 ] 00:31:02.011 [2024-11-06 11:12:53.235392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.011 [2024-11-06 11:12:53.270920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.011 Running I/O for 1 seconds... 
00:31:03.394 1750.00 IOPS, 109.38 MiB/s 00:31:03.394 Latency(us) 00:31:03.394 [2024-11-06T10:12:54.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:03.394 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:03.394 Verification LBA range: start 0x0 length 0x400 00:31:03.394 Nvme0n1 : 1.02 1794.55 112.16 0.00 0.00 34912.92 3167.57 36918.61 00:31:03.394 [2024-11-06T10:12:54.816Z] =================================================================================================================== 00:31:03.394 [2024-11-06T10:12:54.816Z] Total : 1794.55 112.16 0.00 0.00 34912.92 3167.57 36918.61 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:03.394 
11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:03.394 rmmod nvme_tcp 00:31:03.394 rmmod nvme_fabrics 00:31:03.394 rmmod nvme_keyring 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3473440 ']' 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3473440 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3473440 ']' 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3473440 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3473440 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:03.394 11:12:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3473440' 00:31:03.394 killing process with pid 3473440 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3473440 00:31:03.394 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3473440 00:31:03.394 [2024-11-06 11:12:54.807241] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:03.655 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:03.655 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:03.655 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:03.655 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:03.655 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:03.655 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:03.655 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:03.655 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:03.655 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:03.655 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.656 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:03.656 11:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.569 11:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:05.569 11:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:05.569 00:31:05.569 real 0m14.471s 00:31:05.569 user 0m18.768s 00:31:05.569 sys 0m7.442s 00:31:05.569 11:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:05.569 11:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:05.569 ************************************ 00:31:05.569 END TEST nvmf_host_management 00:31:05.569 ************************************ 00:31:05.569 11:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:05.569 11:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:05.569 11:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:05.569 11:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:05.830 ************************************ 00:31:05.830 START TEST nvmf_lvol 00:31:05.830 ************************************ 00:31:05.830 11:12:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:05.830 * Looking for test storage... 
00:31:05.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:05.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.830 --rc genhtml_branch_coverage=1 00:31:05.830 --rc genhtml_function_coverage=1 00:31:05.830 --rc genhtml_legend=1 00:31:05.830 --rc geninfo_all_blocks=1 00:31:05.830 --rc geninfo_unexecuted_blocks=1 00:31:05.830 00:31:05.830 ' 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:05.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.830 --rc genhtml_branch_coverage=1 00:31:05.830 --rc genhtml_function_coverage=1 00:31:05.830 --rc genhtml_legend=1 00:31:05.830 --rc geninfo_all_blocks=1 00:31:05.830 --rc geninfo_unexecuted_blocks=1 00:31:05.830 00:31:05.830 ' 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:05.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.830 --rc genhtml_branch_coverage=1 00:31:05.830 --rc genhtml_function_coverage=1 00:31:05.830 --rc genhtml_legend=1 00:31:05.830 --rc geninfo_all_blocks=1 00:31:05.830 --rc geninfo_unexecuted_blocks=1 00:31:05.830 00:31:05.830 ' 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:05.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.830 --rc genhtml_branch_coverage=1 00:31:05.830 --rc genhtml_function_coverage=1 00:31:05.830 --rc genhtml_legend=1 00:31:05.830 --rc geninfo_all_blocks=1 00:31:05.830 --rc geninfo_unexecuted_blocks=1 00:31:05.830 00:31:05.830 ' 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.830 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
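The `build_nvmf_app_args` calls above grow the target's command line one conditional at a time. Using a bash array (rather than a string) keeps word boundaries intact when the app is finally launched as `"${NVMF_APP[@]}"`. A reduced sketch; the variable values and the `TEST_INTERRUPT_MODE` name are illustrative stand-ins for what the harness sets:

```shell
# Reduced sketch of the build_nvmf_app_args pattern in nvmf/common.sh:
# append flags to a bash array so later expansion preserves quoting.
# Values below are illustrative, not read from the environment.
NVMF_APP=(nvmf_tgt)
NVMF_APP_SHM_ID=0
TEST_INTERRUPT_MODE=1

NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
[ "$TEST_INTERRUPT_MODE" -eq 1 ] && NVMF_APP+=(--interrupt-mode)

echo "${NVMF_APP[@]}"   # nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode
```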
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:05.831 
11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:05.831 11:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:13.975 11:13:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:13.975 11:13:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:13.975 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:13.975 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:13.975 11:13:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.975 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:13.976 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.976 11:13:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:13.976 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
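The discovery loop traced above (nvmf/common.sh@410-429) globs each candidate PCI address's `net/` directory in sysfs and keeps the bound interface names, which is how `cvl_0_0` and `cvl_0_1` are found. A sketch of that loop; `discover_net_devs` and the `$sysfs` indirection are illustrative (the real script globs `/sys/bus/pci/devices` directly):

```shell
# Sketch of the net-device discovery in gather_supported_nvmf_pci_devs:
# for each PCI address, glob its net/ entry and keep the interface names.
# $sysfs is parameterized only so the sketch can run against a fake tree.
sysfs=${sysfs:-/sys/bus/pci/devices}

discover_net_devs() {
    local pci
    local -a pci_net_devs
    net_devs=()
    for pci in "$@"; do
        pci_net_devs=("$sysfs/$pci/net/"*)
        [ -e "${pci_net_devs[0]}" ] || continue   # no netdev bound to this PCI function
        net_devs+=("${pci_net_devs[@]##*/}")      # strip path, keep interface name
    done
}
```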
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:13.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:13.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:31:13.976 00:31:13.976 --- 10.0.0.2 ping statistics --- 00:31:13.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.976 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:13.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:13.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:31:13.976 00:31:13.976 --- 10.0.0.1 ping statistics --- 00:31:13.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.976 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3478502 
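The `nvmf_tcp_init` sequence above moves one port of the NIC pair into a fresh network namespace so target and initiator can talk over real hardware on a single host, then sanity-checks the link with `ping` in both directions. An echo-only replay of that sequence; interface, namespace, and address names match the log, while `run()` is a hypothetical wrapper (swap its body for `"$@"` to execute for real, which requires root and the physical NICs):

```shell
# Echo-only replay of the namespace plumbing from nvmf_tcp_init.
# run() only prints the commands; executing them needs root privileges.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0  INI_IF=cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

run() { echo "+ $*"; }

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                    # target port lives in the namespace
run ip addr add "$INI_IP/24" dev "$INI_IF"
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                                  # initiator -> target sanity check
```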
00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3478502 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3478502 ']' 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:13.976 11:13:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:13.976 [2024-11-06 11:13:04.602642] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:13.976 [2024-11-06 11:13:04.603615] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:31:13.976 [2024-11-06 11:13:04.603655] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.976 [2024-11-06 11:13:04.681074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:13.976 [2024-11-06 11:13:04.716816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:13.976 [2024-11-06 11:13:04.716847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:13.976 [2024-11-06 11:13:04.716855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:13.976 [2024-11-06 11:13:04.716862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:13.976 [2024-11-06 11:13:04.716868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:13.976 [2024-11-06 11:13:04.718113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.976 [2024-11-06 11:13:04.718244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:13.976 [2024-11-06 11:13:04.718247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.976 [2024-11-06 11:13:04.772773] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:13.977 [2024-11-06 11:13:04.773176] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:13.977 [2024-11-06 11:13:04.773554] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
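The `waitforlisten` call traced above blocks until the freshly started `nvmf_tgt` answers on its RPC socket, giving up after `max_retries=100`. A simplified standalone take that only checks process liveness and socket existence; the real helper in autotest_common.sh additionally issues an rpc.py call to confirm readiness:

```shell
# Simplified waitforlisten: poll until the app's UNIX-domain RPC socket
# appears, bailing out early if the process dies before listening.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local i max_retries=${3:-100}
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # process exited before listening
        [ -S "$rpc_addr" ] && return 0           # socket is up
        sleep 0.1
    done
    return 1                                      # timed out
}
```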
00:31:13.977 [2024-11-06 11:13:04.773848] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:14.238 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:14.238 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:31:14.238 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:14.238 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:14.238 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:14.238 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.238 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:14.238 [2024-11-06 11:13:05.599112] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.238 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:14.499 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:14.499 11:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:14.761 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:14.761 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:15.069 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:15.069 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=75db598e-0431-468e-81b1-352944ac6c00 00:31:15.069 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 75db598e-0431-468e-81b1-352944ac6c00 lvol 20 00:31:15.329 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d6a4b286-dcaf-409a-8575-e1041ef5ff56 00:31:15.329 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:15.329 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d6a4b286-dcaf-409a-8575-e1041ef5ff56 00:31:15.590 11:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:15.851 [2024-11-06 11:13:07.042881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.851 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:15.851 
11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3478989 00:31:15.851 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:15.851 11:13:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:17.235 11:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d6a4b286-dcaf-409a-8575-e1041ef5ff56 MY_SNAPSHOT 00:31:17.235 11:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1fad6646-1744-4f69-826b-4a1462615b34 00:31:17.235 11:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d6a4b286-dcaf-409a-8575-e1041ef5ff56 30 00:31:17.496 11:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1fad6646-1744-4f69-826b-4a1462615b34 MY_CLONE 00:31:17.496 11:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4772249a-229f-4b2a-93e3-f59ee845464d 00:31:17.496 11:13:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4772249a-229f-4b2a-93e3-f59ee845464d 00:31:18.069 11:13:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3478989 00:31:26.211 Initializing NVMe Controllers 00:31:26.211 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:26.211 
Controller IO queue size 128, less than required. 00:31:26.211 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:26.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:26.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:26.211 Initialization complete. Launching workers. 00:31:26.211 ======================================================== 00:31:26.211 Latency(us) 00:31:26.211 Device Information : IOPS MiB/s Average min max 00:31:26.211 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14975.50 58.50 8548.80 1489.20 55953.27 00:31:26.211 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12269.60 47.93 10432.63 4115.72 57507.32 00:31:26.211 ======================================================== 00:31:26.211 Total : 27245.10 106.43 9397.17 1489.20 57507.32 00:31:26.211 00:31:26.211 11:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:26.473 11:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d6a4b286-dcaf-409a-8575-e1041ef5ff56 00:31:26.473 11:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 75db598e-0431-468e-81b1-352944ac6c00 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:26.734 rmmod nvme_tcp 00:31:26.734 rmmod nvme_fabrics 00:31:26.734 rmmod nvme_keyring 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3478502 ']' 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3478502 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3478502 ']' 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3478502 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:26.734 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 3478502 00:31:26.995 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:26.995 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:26.995 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3478502' 00:31:26.995 killing process with pid 3478502 00:31:26.995 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3478502 00:31:26.995 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3478502 00:31:26.995 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:26.995 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:26.995 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:26.995 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:26.995 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:31:26.995 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:26.995 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:31:26.996 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:26.996 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:26.996 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.996 11:13:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:26.996 11:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.547 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:29.547 00:31:29.547 real 0m23.401s 00:31:29.547 user 0m55.217s 00:31:29.547 sys 0m10.336s 00:31:29.547 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:29.547 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:29.547 ************************************ 00:31:29.547 END TEST nvmf_lvol 00:31:29.547 ************************************ 00:31:29.547 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:29.547 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:29.547 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:29.547 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:29.547 ************************************ 00:31:29.547 START TEST nvmf_lvs_grow 00:31:29.547 ************************************ 00:31:29.547 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:29.547 * Looking for test storage... 
00:31:29.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:29.547 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:29.547 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:29.547 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:31:29.547 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:29.548 11:13:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:29.548 11:13:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:29.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.548 --rc genhtml_branch_coverage=1 00:31:29.548 --rc genhtml_function_coverage=1 00:31:29.548 --rc genhtml_legend=1 00:31:29.548 --rc geninfo_all_blocks=1 00:31:29.548 --rc geninfo_unexecuted_blocks=1 00:31:29.548 00:31:29.548 ' 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:29.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.548 --rc genhtml_branch_coverage=1 00:31:29.548 --rc genhtml_function_coverage=1 00:31:29.548 --rc genhtml_legend=1 00:31:29.548 --rc geninfo_all_blocks=1 00:31:29.548 --rc geninfo_unexecuted_blocks=1 00:31:29.548 00:31:29.548 ' 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:29.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.548 --rc genhtml_branch_coverage=1 00:31:29.548 --rc genhtml_function_coverage=1 00:31:29.548 --rc genhtml_legend=1 00:31:29.548 --rc geninfo_all_blocks=1 00:31:29.548 --rc geninfo_unexecuted_blocks=1 00:31:29.548 00:31:29.548 ' 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:29.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.548 --rc genhtml_branch_coverage=1 00:31:29.548 --rc genhtml_function_coverage=1 00:31:29.548 --rc genhtml_legend=1 00:31:29.548 --rc geninfo_all_blocks=1 00:31:29.548 --rc 
geninfo_unexecuted_blocks=1 00:31:29.548 00:31:29.548 ' 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:29.548 11:13:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.548 11:13:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:29.548 11:13:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:29.548 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:29.549 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:29.549 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:29.549 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:29.549 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:29.549 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:29.549 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:29.549 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.549 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.549 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.549 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:29.549 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:29.549 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:29.549 11:13:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:36.143 
11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:36.143 11:13:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:36.143 11:13:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:36.143 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:36.143 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.143 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:36.144 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.144 11:13:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:36.144 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:36.144 
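The device discovery above (common.sh@410-429) maps each e810 PCI address to its kernel net interface by globbing sysfs: `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`, which is how `0000:4b:00.0` resolves to `cvl_0_0` and `0000:4b:00.1` to `cvl_0_1`. The same lookup can be sketched in Python; the function name is illustrative, not part of SPDK:

```python
from pathlib import Path

def net_devs_for_pci(pci_addr: str, sysfs_root: str = "/sys/bus/pci/devices") -> list[str]:
    """Return kernel net-interface names bound to a PCI address, mirroring
    the pci_net_devs glob in nvmf/common.sh@411."""
    net_dir = Path(sysfs_root) / pci_addr / "net"
    if not net_dir.is_dir():
        # Device absent, or bound to a driver that exposes no net interface
        return []
    return sorted(p.name for p in net_dir.iterdir())
```

On the test node this would return `["cvl_0_0"]` for `0000:4b:00.0`; the harness then strips the directory prefix (`common.sh@427`) and collects the names into `net_devs`.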
11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:36.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:36.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:31:36.144 00:31:36.144 --- 10.0.0.2 ping statistics --- 00:31:36.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.144 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:36.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:36.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:31:36.144 00:31:36.144 --- 10.0.0.1 ping statistics --- 00:31:36.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.144 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:36.144 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:36.405 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:36.405 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:36.405 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:36.405 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:36.405 11:13:27 
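The `nvmf_tcp_init` sequence above (common.sh@250-291) moves the target-side port into its own network namespace so initiator and target traffic cross the physical link rather than the loopback path. Condensed from the log, the topology setup amounts to the following; it requires root, and the interface names and addresses are the ones this harness chose, not fixed SPDK defaults:

```shell
# Target port cvl_0_0 goes into a dedicated netns; cvl_0_1 stays in the host ns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Reachability checks in both directions (the two pings in the log)
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

After this, every target-side command is wrapped in `ip netns exec cvl_0_0_ns_spdk` via `NVMF_TARGET_NS_CMD`, which is why `nvmf_tgt` is launched inside the namespace below.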
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3485211 00:31:36.405 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3485211 00:31:36.405 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:36.405 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3485211 ']' 00:31:36.405 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.405 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:36.405 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.405 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:36.405 11:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:36.405 [2024-11-06 11:13:27.640638] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:36.405 [2024-11-06 11:13:27.641787] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:31:36.405 [2024-11-06 11:13:27.641840] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.405 [2024-11-06 11:13:27.725043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.405 [2024-11-06 11:13:27.765248] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:36.405 [2024-11-06 11:13:27.765281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.405 [2024-11-06 11:13:27.765289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.405 [2024-11-06 11:13:27.765296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:36.405 [2024-11-06 11:13:27.765302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:36.405 [2024-11-06 11:13:27.765876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.405 [2024-11-06 11:13:27.821504] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:36.405 [2024-11-06 11:13:27.821794] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:37.347 [2024-11-06 11:13:28.634352] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:37.347 ************************************ 00:31:37.347 START TEST lvs_grow_clean 00:31:37.347 ************************************ 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:31:37.347 11:13:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:37.347 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:37.608 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:37.608 11:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:37.868 11:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=75385606-ea72-4cad-a563-1da652970fe2 00:31:37.868 11:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75385606-ea72-4cad-a563-1da652970fe2 00:31:37.868 11:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:38.127 11:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:38.127 11:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:38.128 11:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 75385606-ea72-4cad-a563-1da652970fe2 lvol 150 00:31:38.128 11:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e7d09811-a588-4f1c-a751-fe5169f7d0f3 00:31:38.128 11:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:38.128 11:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:38.388 [2024-11-06 11:13:29.638330] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:38.388 [2024-11-06 11:13:29.638478] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:38.388 true 00:31:38.388 11:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75385606-ea72-4cad-a563-1da652970fe2 00:31:38.388 11:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:38.648 11:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:38.648 11:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:38.648 11:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e7d09811-a588-4f1c-a751-fe5169f7d0f3 00:31:38.908 11:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:38.908 [2024-11-06 11:13:30.306948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:38.908 11:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
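The `bdev_aio_rescan` notice above reports the resize in blocks: old block count 51200, new block count 102400, with the 4096-byte block size passed to `bdev_aio_create`. A quick check that these match the two `truncate` sizes (200M initial, 400M after the grow):

```python
# Verify the AIO bdev block counts against the truncate sizes from the log.
BLOCK_SIZE = 4096
MiB = 1024 * 1024

assert 51200 * BLOCK_SIZE == 200 * MiB    # initial `truncate -s 200M aio_bdev`
assert 102400 * BLOCK_SIZE == 400 * MiB   # grown   `truncate -s 400M aio_bdev`
```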
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:39.168 11:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3485644 00:31:39.168 11:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:39.168 11:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:39.168 11:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3485644 /var/tmp/bdevperf.sock 00:31:39.168 11:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3485644 ']' 00:31:39.168 11:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:39.168 11:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:39.168 11:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:39.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:39.168 11:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:39.168 11:13:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:39.168 [2024-11-06 11:13:30.540324] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:31:39.168 [2024-11-06 11:13:30.540379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3485644 ] 00:31:39.429 [2024-11-06 11:13:30.628114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.429 [2024-11-06 11:13:30.664884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.999 11:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:39.999 11:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:31:39.999 11:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:40.261 Nvme0n1 00:31:40.261 11:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:40.521 [ 00:31:40.521 { 00:31:40.521 "name": "Nvme0n1", 00:31:40.521 "aliases": [ 00:31:40.521 "e7d09811-a588-4f1c-a751-fe5169f7d0f3" 00:31:40.521 ], 00:31:40.521 "product_name": "NVMe disk", 00:31:40.521 
"block_size": 4096, 00:31:40.521 "num_blocks": 38912, 00:31:40.521 "uuid": "e7d09811-a588-4f1c-a751-fe5169f7d0f3", 00:31:40.521 "numa_id": 0, 00:31:40.521 "assigned_rate_limits": { 00:31:40.521 "rw_ios_per_sec": 0, 00:31:40.521 "rw_mbytes_per_sec": 0, 00:31:40.521 "r_mbytes_per_sec": 0, 00:31:40.521 "w_mbytes_per_sec": 0 00:31:40.521 }, 00:31:40.521 "claimed": false, 00:31:40.521 "zoned": false, 00:31:40.521 "supported_io_types": { 00:31:40.521 "read": true, 00:31:40.521 "write": true, 00:31:40.521 "unmap": true, 00:31:40.521 "flush": true, 00:31:40.521 "reset": true, 00:31:40.521 "nvme_admin": true, 00:31:40.521 "nvme_io": true, 00:31:40.521 "nvme_io_md": false, 00:31:40.521 "write_zeroes": true, 00:31:40.521 "zcopy": false, 00:31:40.521 "get_zone_info": false, 00:31:40.521 "zone_management": false, 00:31:40.521 "zone_append": false, 00:31:40.521 "compare": true, 00:31:40.521 "compare_and_write": true, 00:31:40.521 "abort": true, 00:31:40.521 "seek_hole": false, 00:31:40.521 "seek_data": false, 00:31:40.521 "copy": true, 00:31:40.521 "nvme_iov_md": false 00:31:40.521 }, 00:31:40.521 "memory_domains": [ 00:31:40.521 { 00:31:40.521 "dma_device_id": "system", 00:31:40.521 "dma_device_type": 1 00:31:40.521 } 00:31:40.521 ], 00:31:40.521 "driver_specific": { 00:31:40.521 "nvme": [ 00:31:40.521 { 00:31:40.521 "trid": { 00:31:40.521 "trtype": "TCP", 00:31:40.521 "adrfam": "IPv4", 00:31:40.521 "traddr": "10.0.0.2", 00:31:40.521 "trsvcid": "4420", 00:31:40.521 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:40.521 }, 00:31:40.521 "ctrlr_data": { 00:31:40.521 "cntlid": 1, 00:31:40.521 "vendor_id": "0x8086", 00:31:40.521 "model_number": "SPDK bdev Controller", 00:31:40.521 "serial_number": "SPDK0", 00:31:40.521 "firmware_revision": "25.01", 00:31:40.521 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:40.521 "oacs": { 00:31:40.521 "security": 0, 00:31:40.521 "format": 0, 00:31:40.521 "firmware": 0, 00:31:40.521 "ns_manage": 0 00:31:40.521 }, 00:31:40.521 "multi_ctrlr": true, 
00:31:40.521 "ana_reporting": false 00:31:40.521 }, 00:31:40.521 "vs": { 00:31:40.521 "nvme_version": "1.3" 00:31:40.521 }, 00:31:40.521 "ns_data": { 00:31:40.521 "id": 1, 00:31:40.521 "can_share": true 00:31:40.521 } 00:31:40.521 } 00:31:40.521 ], 00:31:40.521 "mp_policy": "active_passive" 00:31:40.521 } 00:31:40.521 } 00:31:40.521 ] 00:31:40.521 11:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3485939 00:31:40.521 11:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:40.521 11:13:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:40.521 Running I/O for 10 seconds... 00:31:41.553 Latency(us) 00:31:41.553 [2024-11-06T10:13:32.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:41.553 Nvme0n1 : 1.00 17781.00 69.46 0.00 0.00 0.00 0.00 0.00 00:31:41.553 [2024-11-06T10:13:32.975Z] =================================================================================================================== 00:31:41.553 [2024-11-06T10:13:32.975Z] Total : 17781.00 69.46 0.00 0.00 0.00 0.00 0.00 00:31:41.553 00:31:42.493 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 75385606-ea72-4cad-a563-1da652970fe2 00:31:42.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:42.493 Nvme0n1 : 2.00 17876.00 69.83 0.00 0.00 0.00 0.00 0.00 00:31:42.493 [2024-11-06T10:13:33.915Z] 
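The `"num_blocks": 38912` in the bdev dump above is slightly larger than the requested 150 MiB (which would be 38400 blocks of 4096 bytes). That is consistent with lvol capacity being allocated in whole clusters of the lvstore's `--cluster-sz 4194304`, so 150 MiB rounds up to 38 clusters (152 MiB). Treat the rounding rule as inferred from the numbers here rather than a statement of the on-disk format:

```python
import math

BLOCK_SIZE = 4096
CLUSTER_SIZE = 4 * 1024 * 1024      # --cluster-sz 4194304 from the log
lvol_mib = 150                      # bdev_lvol_create ... lvol 150

# 150 MiB is not a multiple of the 4 MiB cluster, so it rounds up:
clusters = math.ceil(lvol_mib * 1024 * 1024 / CLUSTER_SIZE)
num_blocks = clusters * CLUSTER_SIZE // BLOCK_SIZE

assert clusters == 38
assert num_blocks == 38912          # matches "num_blocks" in the JSON above
```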
=================================================================================================================== 00:31:42.493 [2024-11-06T10:13:33.915Z] Total : 17876.00 69.83 0.00 0.00 0.00 0.00 0.00 00:31:42.493 00:31:42.753 true 00:31:42.753 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75385606-ea72-4cad-a563-1da652970fe2 00:31:42.753 11:13:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:42.753 11:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:42.753 11:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:42.753 11:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3485939 00:31:43.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:43.693 Nvme0n1 : 3.00 17923.67 70.01 0.00 0.00 0.00 0.00 0.00 00:31:43.693 [2024-11-06T10:13:35.116Z] =================================================================================================================== 00:31:43.694 [2024-11-06T10:13:35.116Z] Total : 17923.67 70.01 0.00 0.00 0.00 0.00 0.00 00:31:43.694 00:31:44.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:44.634 Nvme0n1 : 4.00 17951.25 70.12 0.00 0.00 0.00 0.00 0.00 00:31:44.634 [2024-11-06T10:13:36.056Z] =================================================================================================================== 00:31:44.634 [2024-11-06T10:13:36.057Z] Total : 17951.25 70.12 0.00 0.00 0.00 0.00 0.00 00:31:44.635 00:31:45.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
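The grow check above sees `total_data_clusters` move from 49 to 99 after `bdev_lvol_grow_lvstore`. With 4 MiB clusters, a 200 MiB backing bdev holds 50 clusters and a 400 MiB one holds 100, so the observed counts suggest roughly one cluster's worth of lvstore metadata; that overhead figure is an inference from these two data points, not the exact on-disk layout:

```python
MiB = 1024 * 1024
CLUSTER = 4 * MiB

def data_clusters(bdev_mib: int, metadata_clusters: int = 1) -> int:
    # Assumed model: total clusters minus ~1 cluster of lvstore metadata,
    # which reproduces the 49 and 99 reported in this log.
    return bdev_mib * MiB // CLUSTER - metadata_clusters

assert data_clusters(200) == 49          # before the grow
assert data_clusters(400) == 99          # after truncate + rescan + grow

lvol_clusters = -(-150 // 4)             # 150 MiB lvol rounded up to 4 MiB clusters
assert data_clusters(400) - lvol_clusters == 61   # free_clusters reported later
```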
size: 4096) 00:31:45.575 Nvme0n1 : 5.00 17980.60 70.24 0.00 0.00 0.00 0.00 0.00 00:31:45.575 [2024-11-06T10:13:36.997Z] =================================================================================================================== 00:31:45.575 [2024-11-06T10:13:36.997Z] Total : 17980.60 70.24 0.00 0.00 0.00 0.00 0.00 00:31:45.575 00:31:46.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:46.515 Nvme0n1 : 6.00 18000.00 70.31 0.00 0.00 0.00 0.00 0.00 00:31:46.515 [2024-11-06T10:13:37.937Z] =================================================================================================================== 00:31:46.515 [2024-11-06T10:13:37.937Z] Total : 18000.00 70.31 0.00 0.00 0.00 0.00 0.00 00:31:46.515 00:31:47.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:47.896 Nvme0n1 : 7.00 18023.00 70.40 0.00 0.00 0.00 0.00 0.00 00:31:47.896 [2024-11-06T10:13:39.318Z] =================================================================================================================== 00:31:47.896 [2024-11-06T10:13:39.318Z] Total : 18023.00 70.40 0.00 0.00 0.00 0.00 0.00 00:31:47.896 00:31:48.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:48.837 Nvme0n1 : 8.00 18024.38 70.41 0.00 0.00 0.00 0.00 0.00 00:31:48.837 [2024-11-06T10:13:40.259Z] =================================================================================================================== 00:31:48.837 [2024-11-06T10:13:40.259Z] Total : 18024.38 70.41 0.00 0.00 0.00 0.00 0.00 00:31:48.837 00:31:49.780 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:49.780 Nvme0n1 : 9.00 18039.56 70.47 0.00 0.00 0.00 0.00 0.00 00:31:49.780 [2024-11-06T10:13:41.202Z] =================================================================================================================== 00:31:49.780 [2024-11-06T10:13:41.202Z] Total : 18039.56 70.47 0.00 0.00 0.00 0.00 0.00 00:31:49.780 
00:31:50.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:50.723 Nvme0n1 : 10.00 18051.70 70.51 0.00 0.00 0.00 0.00 0.00 00:31:50.723 [2024-11-06T10:13:42.145Z] =================================================================================================================== 00:31:50.723 [2024-11-06T10:13:42.145Z] Total : 18051.70 70.51 0.00 0.00 0.00 0.00 0.00 00:31:50.723 00:31:50.723 00:31:50.723 Latency(us) 00:31:50.723 [2024-11-06T10:13:42.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:50.723 Nvme0n1 : 10.00 18049.52 70.51 0.00 0.00 7088.46 2239.15 13161.81 00:31:50.723 [2024-11-06T10:13:42.145Z] =================================================================================================================== 00:31:50.723 [2024-11-06T10:13:42.145Z] Total : 18049.52 70.51 0.00 0.00 7088.46 2239.15 13161.81 00:31:50.723 { 00:31:50.723 "results": [ 00:31:50.723 { 00:31:50.723 "job": "Nvme0n1", 00:31:50.723 "core_mask": "0x2", 00:31:50.723 "workload": "randwrite", 00:31:50.723 "status": "finished", 00:31:50.723 "queue_depth": 128, 00:31:50.723 "io_size": 4096, 00:31:50.723 "runtime": 10.004808, 00:31:50.723 "iops": 18049.521789923405, 00:31:50.723 "mibps": 70.5059444918883, 00:31:50.723 "io_failed": 0, 00:31:50.723 "io_timeout": 0, 00:31:50.723 "avg_latency_us": 7088.460233984192, 00:31:50.724 "min_latency_us": 2239.1466666666665, 00:31:50.724 "max_latency_us": 13161.813333333334 00:31:50.724 } 00:31:50.724 ], 00:31:50.724 "core_count": 1 00:31:50.724 } 00:31:50.724 11:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3485644 00:31:50.724 11:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3485644 ']' 00:31:50.724 11:13:41 
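The bdevperf summary JSON above is internally consistent: `mibps` is just `iops` scaled by the 4096-byte I/O size (`-o 4096`). Recomputing it from the reported figures:

```python
# Reproduce bdevperf's MiB/s figure from its own iops and io_size fields.
iops = 18049.521789923405           # "iops" from the results JSON above
io_size = 4096                      # bdevperf -o 4096
mibps = iops * io_size / (1024 * 1024)

assert abs(mibps - 70.5059444918883) < 1e-9   # "mibps" in the JSON
```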
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3485644 00:31:50.724 11:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:31:50.724 11:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:50.724 11:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3485644 00:31:50.724 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:50.724 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:50.724 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3485644' 00:31:50.724 killing process with pid 3485644 00:31:50.724 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3485644 00:31:50.724 Received shutdown signal, test time was about 10.000000 seconds 00:31:50.724 00:31:50.724 Latency(us) 00:31:50.724 [2024-11-06T10:13:42.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.724 [2024-11-06T10:13:42.146Z] =================================================================================================================== 00:31:50.724 [2024-11-06T10:13:42.146Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:50.724 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3485644 00:31:50.724 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:50.985 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:51.244 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75385606-ea72-4cad-a563-1da652970fe2 00:31:51.244 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:51.244 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:51.244 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:51.244 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:51.506 [2024-11-06 11:13:42.778401] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:51.506 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75385606-ea72-4cad-a563-1da652970fe2 00:31:51.506 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:31:51.506 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75385606-ea72-4cad-a563-1da652970fe2 00:31:51.506 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:51.506 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:51.506 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:51.506 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:51.506 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:51.506 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:51.506 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:51.506 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:51.506 11:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75385606-ea72-4cad-a563-1da652970fe2 00:31:51.766 request: 00:31:51.766 { 00:31:51.766 "uuid": "75385606-ea72-4cad-a563-1da652970fe2", 00:31:51.766 "method": 
"bdev_lvol_get_lvstores", 00:31:51.766 "req_id": 1 00:31:51.766 } 00:31:51.766 Got JSON-RPC error response 00:31:51.766 response: 00:31:51.766 { 00:31:51.766 "code": -19, 00:31:51.766 "message": "No such device" 00:31:51.766 } 00:31:51.767 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:31:51.767 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:51.767 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:51.767 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:51.767 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:51.767 aio_bdev 00:31:51.767 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e7d09811-a588-4f1c-a751-fe5169f7d0f3 00:31:51.767 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=e7d09811-a588-4f1c-a751-fe5169f7d0f3 00:31:51.767 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:51.767 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:31:51.767 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:51.767 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:51.767 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:52.027 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e7d09811-a588-4f1c-a751-fe5169f7d0f3 -t 2000 00:31:52.290 [ 00:31:52.290 { 00:31:52.290 "name": "e7d09811-a588-4f1c-a751-fe5169f7d0f3", 00:31:52.290 "aliases": [ 00:31:52.290 "lvs/lvol" 00:31:52.290 ], 00:31:52.290 "product_name": "Logical Volume", 00:31:52.290 "block_size": 4096, 00:31:52.290 "num_blocks": 38912, 00:31:52.290 "uuid": "e7d09811-a588-4f1c-a751-fe5169f7d0f3", 00:31:52.290 "assigned_rate_limits": { 00:31:52.290 "rw_ios_per_sec": 0, 00:31:52.290 "rw_mbytes_per_sec": 0, 00:31:52.290 "r_mbytes_per_sec": 0, 00:31:52.290 "w_mbytes_per_sec": 0 00:31:52.290 }, 00:31:52.290 "claimed": false, 00:31:52.290 "zoned": false, 00:31:52.290 "supported_io_types": { 00:31:52.290 "read": true, 00:31:52.290 "write": true, 00:31:52.290 "unmap": true, 00:31:52.290 "flush": false, 00:31:52.290 "reset": true, 00:31:52.290 "nvme_admin": false, 00:31:52.290 "nvme_io": false, 00:31:52.290 "nvme_io_md": false, 00:31:52.290 "write_zeroes": true, 00:31:52.290 "zcopy": false, 00:31:52.290 "get_zone_info": false, 00:31:52.290 "zone_management": false, 00:31:52.290 "zone_append": false, 00:31:52.290 "compare": false, 00:31:52.290 "compare_and_write": false, 00:31:52.290 "abort": false, 00:31:52.290 "seek_hole": true, 00:31:52.290 "seek_data": true, 00:31:52.290 "copy": false, 00:31:52.290 "nvme_iov_md": false 00:31:52.290 }, 00:31:52.290 "driver_specific": { 00:31:52.290 "lvol": { 00:31:52.290 "lvol_store_uuid": "75385606-ea72-4cad-a563-1da652970fe2", 00:31:52.290 "base_bdev": "aio_bdev", 00:31:52.290 
"thin_provision": false, 00:31:52.290 "num_allocated_clusters": 38, 00:31:52.290 "snapshot": false, 00:31:52.290 "clone": false, 00:31:52.290 "esnap_clone": false 00:31:52.290 } 00:31:52.290 } 00:31:52.290 } 00:31:52.290 ] 00:31:52.290 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:31:52.290 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75385606-ea72-4cad-a563-1da652970fe2 00:31:52.290 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:52.290 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:52.290 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 75385606-ea72-4cad-a563-1da652970fe2 00:31:52.290 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:52.552 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:52.552 11:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e7d09811-a588-4f1c-a751-fe5169f7d0f3 00:31:52.812 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 75385606-ea72-4cad-a563-1da652970fe2 
00:31:52.812 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:53.071 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:53.071 00:31:53.072 real 0m15.738s 00:31:53.072 user 0m15.456s 00:31:53.072 sys 0m1.326s 00:31:53.072 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:53.072 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:53.072 ************************************ 00:31:53.072 END TEST lvs_grow_clean 00:31:53.072 ************************************ 00:31:53.072 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:53.072 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:53.072 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:53.072 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:53.332 ************************************ 00:31:53.332 START TEST lvs_grow_dirty 00:31:53.332 ************************************ 00:31:53.332 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:31:53.332 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:53.332 11:13:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:53.332 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:53.332 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:53.332 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:53.332 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:53.332 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:53.332 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:53.332 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:53.332 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:53.333 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:53.593 11:13:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=76152743-598a-4d38-8e6d-e4e16f06a0f8 00:31:53.593 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76152743-598a-4d38-8e6d-e4e16f06a0f8 00:31:53.593 11:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:53.853 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:53.853 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:53.853 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 76152743-598a-4d38-8e6d-e4e16f06a0f8 lvol 150 00:31:53.853 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=976d4491-b9d4-4a6b-af69-a71f0b3e918c 00:31:53.853 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:53.853 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:54.113 [2024-11-06 11:13:45.394218] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:54.113 [2024-11-06 
11:13:45.394288] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:54.113 true 00:31:54.113 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76152743-598a-4d38-8e6d-e4e16f06a0f8 00:31:54.113 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:54.373 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:54.373 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:54.373 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 976d4491-b9d4-4a6b-af69-a71f0b3e918c 00:31:54.634 11:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:54.895 [2024-11-06 11:13:46.070892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.895 11:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:54.895 11:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:54.895 11:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3488679 00:31:54.895 11:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:54.895 11:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3488679 /var/tmp/bdevperf.sock 00:31:54.895 11:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3488679 ']' 00:31:54.895 11:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:54.895 11:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:54.895 11:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:54.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:54.895 11:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:54.895 11:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:54.895 [2024-11-06 11:13:46.286206] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:31:54.895 [2024-11-06 11:13:46.286258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3488679 ] 00:31:55.156 [2024-11-06 11:13:46.371226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.156 [2024-11-06 11:13:46.401050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.156 11:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:55.156 11:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:31:55.156 11:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:55.417 Nvme0n1 00:31:55.417 11:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:55.678 [ 00:31:55.678 { 00:31:55.678 "name": "Nvme0n1", 00:31:55.678 "aliases": [ 00:31:55.678 "976d4491-b9d4-4a6b-af69-a71f0b3e918c" 00:31:55.678 ], 00:31:55.678 "product_name": "NVMe disk", 00:31:55.678 "block_size": 4096, 00:31:55.678 "num_blocks": 38912, 00:31:55.678 "uuid": "976d4491-b9d4-4a6b-af69-a71f0b3e918c", 00:31:55.678 "numa_id": 0, 00:31:55.678 "assigned_rate_limits": { 00:31:55.678 "rw_ios_per_sec": 0, 00:31:55.678 "rw_mbytes_per_sec": 0, 00:31:55.678 "r_mbytes_per_sec": 0, 00:31:55.678 "w_mbytes_per_sec": 0 00:31:55.678 }, 00:31:55.678 "claimed": false, 00:31:55.678 "zoned": false, 
00:31:55.678 "supported_io_types": { 00:31:55.678 "read": true, 00:31:55.678 "write": true, 00:31:55.678 "unmap": true, 00:31:55.678 "flush": true, 00:31:55.678 "reset": true, 00:31:55.678 "nvme_admin": true, 00:31:55.678 "nvme_io": true, 00:31:55.678 "nvme_io_md": false, 00:31:55.678 "write_zeroes": true, 00:31:55.678 "zcopy": false, 00:31:55.678 "get_zone_info": false, 00:31:55.678 "zone_management": false, 00:31:55.678 "zone_append": false, 00:31:55.678 "compare": true, 00:31:55.678 "compare_and_write": true, 00:31:55.678 "abort": true, 00:31:55.678 "seek_hole": false, 00:31:55.678 "seek_data": false, 00:31:55.678 "copy": true, 00:31:55.678 "nvme_iov_md": false 00:31:55.678 }, 00:31:55.678 "memory_domains": [ 00:31:55.678 { 00:31:55.678 "dma_device_id": "system", 00:31:55.679 "dma_device_type": 1 00:31:55.679 } 00:31:55.679 ], 00:31:55.679 "driver_specific": { 00:31:55.679 "nvme": [ 00:31:55.679 { 00:31:55.679 "trid": { 00:31:55.679 "trtype": "TCP", 00:31:55.679 "adrfam": "IPv4", 00:31:55.679 "traddr": "10.0.0.2", 00:31:55.679 "trsvcid": "4420", 00:31:55.679 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:55.679 }, 00:31:55.679 "ctrlr_data": { 00:31:55.679 "cntlid": 1, 00:31:55.679 "vendor_id": "0x8086", 00:31:55.679 "model_number": "SPDK bdev Controller", 00:31:55.679 "serial_number": "SPDK0", 00:31:55.679 "firmware_revision": "25.01", 00:31:55.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:55.679 "oacs": { 00:31:55.679 "security": 0, 00:31:55.679 "format": 0, 00:31:55.679 "firmware": 0, 00:31:55.679 "ns_manage": 0 00:31:55.679 }, 00:31:55.679 "multi_ctrlr": true, 00:31:55.679 "ana_reporting": false 00:31:55.679 }, 00:31:55.679 "vs": { 00:31:55.679 "nvme_version": "1.3" 00:31:55.679 }, 00:31:55.679 "ns_data": { 00:31:55.679 "id": 1, 00:31:55.679 "can_share": true 00:31:55.679 } 00:31:55.679 } 00:31:55.679 ], 00:31:55.679 "mp_policy": "active_passive" 00:31:55.679 } 00:31:55.679 } 00:31:55.679 ] 00:31:55.679 11:13:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3488923 00:31:55.679 11:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:55.679 11:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:55.679 Running I/O for 10 seconds... 00:31:56.619 Latency(us) 00:31:56.619 [2024-11-06T10:13:48.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:56.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:56.619 Nvme0n1 : 1.00 17783.00 69.46 0.00 0.00 0.00 0.00 0.00 00:31:56.619 [2024-11-06T10:13:48.041Z] =================================================================================================================== 00:31:56.619 [2024-11-06T10:13:48.041Z] Total : 17783.00 69.46 0.00 0.00 0.00 0.00 0.00 00:31:56.619 00:31:57.559 11:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 76152743-598a-4d38-8e6d-e4e16f06a0f8 00:31:57.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:57.819 Nvme0n1 : 2.00 17908.50 69.96 0.00 0.00 0.00 0.00 0.00 00:31:57.819 [2024-11-06T10:13:49.241Z] =================================================================================================================== 00:31:57.819 [2024-11-06T10:13:49.241Z] Total : 17908.50 69.96 0.00 0.00 0.00 0.00 0.00 00:31:57.819 00:31:57.819 true 00:31:57.819 11:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 76152743-598a-4d38-8e6d-e4e16f06a0f8 00:31:57.819 11:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:58.079 11:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:58.079 11:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:58.079 11:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3488923 00:31:58.648 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:58.648 Nvme0n1 : 3.00 17929.33 70.04 0.00 0.00 0.00 0.00 0.00 00:31:58.648 [2024-11-06T10:13:50.070Z] =================================================================================================================== 00:31:58.648 [2024-11-06T10:13:50.070Z] Total : 17929.33 70.04 0.00 0.00 0.00 0.00 0.00 00:31:58.648 00:32:00.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:00.029 Nvme0n1 : 4.00 17971.25 70.20 0.00 0.00 0.00 0.00 0.00 00:32:00.029 [2024-11-06T10:13:51.451Z] =================================================================================================================== 00:32:00.029 [2024-11-06T10:13:51.451Z] Total : 17971.25 70.20 0.00 0.00 0.00 0.00 0.00 00:32:00.029 00:32:00.599 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:00.599 Nvme0n1 : 5.00 17983.80 70.25 0.00 0.00 0.00 0.00 0.00 00:32:00.599 [2024-11-06T10:13:52.021Z] =================================================================================================================== 00:32:00.599 [2024-11-06T10:13:52.021Z] Total : 17983.80 70.25 0.00 0.00 0.00 0.00 0.00 00:32:00.599 00:32:01.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:32:01.981 Nvme0n1 : 6.00 18013.33 70.36 0.00 0.00 0.00 0.00 0.00 00:32:01.981 [2024-11-06T10:13:53.403Z] =================================================================================================================== 00:32:01.981 [2024-11-06T10:13:53.403Z] Total : 18013.33 70.36 0.00 0.00 0.00 0.00 0.00 00:32:01.981 00:32:02.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:02.923 Nvme0n1 : 7.00 18034.43 70.45 0.00 0.00 0.00 0.00 0.00 00:32:02.923 [2024-11-06T10:13:54.345Z] =================================================================================================================== 00:32:02.923 [2024-11-06T10:13:54.345Z] Total : 18034.43 70.45 0.00 0.00 0.00 0.00 0.00 00:32:02.923 00:32:03.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:03.863 Nvme0n1 : 8.00 18050.25 70.51 0.00 0.00 0.00 0.00 0.00 00:32:03.863 [2024-11-06T10:13:55.285Z] =================================================================================================================== 00:32:03.863 [2024-11-06T10:13:55.285Z] Total : 18050.25 70.51 0.00 0.00 0.00 0.00 0.00 00:32:03.863 00:32:04.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:04.804 Nvme0n1 : 9.00 18055.56 70.53 0.00 0.00 0.00 0.00 0.00 00:32:04.804 [2024-11-06T10:13:56.226Z] =================================================================================================================== 00:32:04.804 [2024-11-06T10:13:56.226Z] Total : 18055.56 70.53 0.00 0.00 0.00 0.00 0.00 00:32:04.804 00:32:05.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:05.745 Nvme0n1 : 10.00 18066.10 70.57 0.00 0.00 0.00 0.00 0.00 00:32:05.745 [2024-11-06T10:13:57.167Z] =================================================================================================================== 00:32:05.745 [2024-11-06T10:13:57.167Z] Total : 18066.10 70.57 0.00 0.00 0.00 0.00 0.00 00:32:05.745 00:32:05.745 
00:32:05.745 Latency(us) 00:32:05.745 [2024-11-06T10:13:57.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:05.745 Nvme0n1 : 10.00 18066.80 70.57 0.00 0.00 7083.33 1652.05 13052.59 00:32:05.745 [2024-11-06T10:13:57.167Z] =================================================================================================================== 00:32:05.745 [2024-11-06T10:13:57.167Z] Total : 18066.80 70.57 0.00 0.00 7083.33 1652.05 13052.59 00:32:05.745 { 00:32:05.745 "results": [ 00:32:05.745 { 00:32:05.745 "job": "Nvme0n1", 00:32:05.745 "core_mask": "0x2", 00:32:05.745 "workload": "randwrite", 00:32:05.745 "status": "finished", 00:32:05.745 "queue_depth": 128, 00:32:05.745 "io_size": 4096, 00:32:05.745 "runtime": 10.003157, 00:32:05.745 "iops": 18066.796312404174, 00:32:05.745 "mibps": 70.5734230953288, 00:32:05.745 "io_failed": 0, 00:32:05.745 "io_timeout": 0, 00:32:05.745 "avg_latency_us": 7083.333260589293, 00:32:05.745 "min_latency_us": 1652.0533333333333, 00:32:05.745 "max_latency_us": 13052.586666666666 00:32:05.745 } 00:32:05.745 ], 00:32:05.745 "core_count": 1 00:32:05.745 } 00:32:05.745 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3488679 00:32:05.745 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3488679 ']' 00:32:05.745 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3488679 00:32:05.745 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:32:05.745 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:05.745 11:13:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3488679 00:32:05.745 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:05.745 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:05.745 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3488679' 00:32:05.745 killing process with pid 3488679 00:32:05.745 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3488679 00:32:05.745 Received shutdown signal, test time was about 10.000000 seconds 00:32:05.745 00:32:05.745 Latency(us) 00:32:05.746 [2024-11-06T10:13:57.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.746 [2024-11-06T10:13:57.168Z] =================================================================================================================== 00:32:05.746 [2024-11-06T10:13:57.168Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:05.746 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3488679 00:32:06.007 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:06.007 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:06.269 11:13:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76152743-598a-4d38-8e6d-e4e16f06a0f8 00:32:06.269 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:06.530 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:06.531 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:06.531 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3485211 00:32:06.531 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3485211 00:32:06.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3485211 Killed "${NVMF_APP[@]}" "$@" 00:32:06.531 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:06.531 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:06.531 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:06.531 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:06.531 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:06.531 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3491022 00:32:06.531 11:13:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3491022 00:32:06.531 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3491022 ']' 00:32:06.531 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.531 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:06.531 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.531 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:06.531 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:06.531 11:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:06.531 [2024-11-06 11:13:57.878363] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:06.531 [2024-11-06 11:13:57.879505] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:32:06.531 [2024-11-06 11:13:57.879555] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.791 [2024-11-06 11:13:57.957875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.791 [2024-11-06 11:13:57.993668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.791 [2024-11-06 11:13:57.993702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.791 [2024-11-06 11:13:57.993712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.791 [2024-11-06 11:13:57.993720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.791 [2024-11-06 11:13:57.993727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.791 [2024-11-06 11:13:57.994252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.791 [2024-11-06 11:13:58.048320] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:06.791 [2024-11-06 11:13:58.048570] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:07.364 11:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:07.364 11:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:07.364 11:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:07.364 11:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:07.364 11:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:07.364 11:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.364 11:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:07.625 [2024-11-06 11:13:58.845556] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:07.625 [2024-11-06 11:13:58.845687] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:07.625 [2024-11-06 11:13:58.845720] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:07.625 11:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:07.625 11:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 976d4491-b9d4-4a6b-af69-a71f0b3e918c 00:32:07.625 11:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local 
bdev_name=976d4491-b9d4-4a6b-af69-a71f0b3e918c 00:32:07.625 11:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:07.625 11:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:07.625 11:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:07.625 11:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:07.625 11:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:07.886 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 976d4491-b9d4-4a6b-af69-a71f0b3e918c -t 2000 00:32:07.886 [ 00:32:07.886 { 00:32:07.886 "name": "976d4491-b9d4-4a6b-af69-a71f0b3e918c", 00:32:07.886 "aliases": [ 00:32:07.886 "lvs/lvol" 00:32:07.886 ], 00:32:07.886 "product_name": "Logical Volume", 00:32:07.886 "block_size": 4096, 00:32:07.886 "num_blocks": 38912, 00:32:07.886 "uuid": "976d4491-b9d4-4a6b-af69-a71f0b3e918c", 00:32:07.886 "assigned_rate_limits": { 00:32:07.886 "rw_ios_per_sec": 0, 00:32:07.886 "rw_mbytes_per_sec": 0, 00:32:07.886 "r_mbytes_per_sec": 0, 00:32:07.886 "w_mbytes_per_sec": 0 00:32:07.886 }, 00:32:07.886 "claimed": false, 00:32:07.886 "zoned": false, 00:32:07.886 "supported_io_types": { 00:32:07.886 "read": true, 00:32:07.886 "write": true, 00:32:07.886 "unmap": true, 00:32:07.886 "flush": false, 00:32:07.886 "reset": true, 00:32:07.886 "nvme_admin": false, 00:32:07.886 "nvme_io": false, 00:32:07.886 "nvme_io_md": false, 00:32:07.886 "write_zeroes": true, 
00:32:07.886 "zcopy": false, 00:32:07.886 "get_zone_info": false, 00:32:07.886 "zone_management": false, 00:32:07.886 "zone_append": false, 00:32:07.886 "compare": false, 00:32:07.886 "compare_and_write": false, 00:32:07.886 "abort": false, 00:32:07.886 "seek_hole": true, 00:32:07.886 "seek_data": true, 00:32:07.886 "copy": false, 00:32:07.886 "nvme_iov_md": false 00:32:07.886 }, 00:32:07.886 "driver_specific": { 00:32:07.886 "lvol": { 00:32:07.886 "lvol_store_uuid": "76152743-598a-4d38-8e6d-e4e16f06a0f8", 00:32:07.886 "base_bdev": "aio_bdev", 00:32:07.886 "thin_provision": false, 00:32:07.886 "num_allocated_clusters": 38, 00:32:07.886 "snapshot": false, 00:32:07.886 "clone": false, 00:32:07.886 "esnap_clone": false 00:32:07.886 } 00:32:07.886 } 00:32:07.886 } 00:32:07.886 ] 00:32:07.886 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:07.887 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76152743-598a-4d38-8e6d-e4e16f06a0f8 00:32:07.887 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:08.148 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:08.148 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76152743-598a-4d38-8e6d-e4e16f06a0f8 00:32:08.148 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:08.410 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:08.410 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:08.410 [2024-11-06 11:13:59.742676] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:08.410 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76152743-598a-4d38-8e6d-e4e16f06a0f8 00:32:08.410 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:32:08.410 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76152743-598a-4d38-8e6d-e4e16f06a0f8 00:32:08.410 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:08.410 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:08.410 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:08.410 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:08.410 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:08.410 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:08.410 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:08.410 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:08.410 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76152743-598a-4d38-8e6d-e4e16f06a0f8 00:32:08.672 request: 00:32:08.672 { 00:32:08.672 "uuid": "76152743-598a-4d38-8e6d-e4e16f06a0f8", 00:32:08.672 "method": "bdev_lvol_get_lvstores", 00:32:08.672 "req_id": 1 00:32:08.672 } 00:32:08.672 Got JSON-RPC error response 00:32:08.672 response: 00:32:08.672 { 00:32:08.672 "code": -19, 00:32:08.672 "message": "No such device" 00:32:08.672 } 00:32:08.672 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:32:08.672 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:08.672 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:08.672 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:08.672 11:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:08.933 aio_bdev 00:32:08.933 11:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 976d4491-b9d4-4a6b-af69-a71f0b3e918c 00:32:08.933 11:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=976d4491-b9d4-4a6b-af69-a71f0b3e918c 00:32:08.933 11:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:08.933 11:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:08.933 11:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:08.933 11:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:08.933 11:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:08.933 11:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 976d4491-b9d4-4a6b-af69-a71f0b3e918c -t 2000 00:32:09.194 [ 00:32:09.194 { 00:32:09.194 "name": "976d4491-b9d4-4a6b-af69-a71f0b3e918c", 00:32:09.194 "aliases": [ 00:32:09.194 "lvs/lvol" 00:32:09.194 ], 00:32:09.194 "product_name": "Logical Volume", 00:32:09.194 "block_size": 4096, 00:32:09.194 "num_blocks": 38912, 00:32:09.194 "uuid": "976d4491-b9d4-4a6b-af69-a71f0b3e918c", 00:32:09.194 "assigned_rate_limits": { 00:32:09.194 "rw_ios_per_sec": 0, 00:32:09.194 "rw_mbytes_per_sec": 0, 00:32:09.194 
"r_mbytes_per_sec": 0, 00:32:09.194 "w_mbytes_per_sec": 0 00:32:09.194 }, 00:32:09.194 "claimed": false, 00:32:09.194 "zoned": false, 00:32:09.194 "supported_io_types": { 00:32:09.194 "read": true, 00:32:09.194 "write": true, 00:32:09.194 "unmap": true, 00:32:09.194 "flush": false, 00:32:09.194 "reset": true, 00:32:09.194 "nvme_admin": false, 00:32:09.194 "nvme_io": false, 00:32:09.194 "nvme_io_md": false, 00:32:09.194 "write_zeroes": true, 00:32:09.194 "zcopy": false, 00:32:09.194 "get_zone_info": false, 00:32:09.194 "zone_management": false, 00:32:09.194 "zone_append": false, 00:32:09.194 "compare": false, 00:32:09.194 "compare_and_write": false, 00:32:09.194 "abort": false, 00:32:09.194 "seek_hole": true, 00:32:09.194 "seek_data": true, 00:32:09.194 "copy": false, 00:32:09.194 "nvme_iov_md": false 00:32:09.194 }, 00:32:09.194 "driver_specific": { 00:32:09.194 "lvol": { 00:32:09.194 "lvol_store_uuid": "76152743-598a-4d38-8e6d-e4e16f06a0f8", 00:32:09.194 "base_bdev": "aio_bdev", 00:32:09.194 "thin_provision": false, 00:32:09.194 "num_allocated_clusters": 38, 00:32:09.194 "snapshot": false, 00:32:09.194 "clone": false, 00:32:09.194 "esnap_clone": false 00:32:09.194 } 00:32:09.194 } 00:32:09.194 } 00:32:09.194 ] 00:32:09.194 11:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:09.194 11:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76152743-598a-4d38-8e6d-e4e16f06a0f8 00:32:09.194 11:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:09.456 11:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:09.456 11:14:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76152743-598a-4d38-8e6d-e4e16f06a0f8 00:32:09.456 11:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:09.456 11:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:09.456 11:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 976d4491-b9d4-4a6b-af69-a71f0b3e918c 00:32:09.717 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 76152743-598a-4d38-8e6d-e4e16f06a0f8 00:32:09.978 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:10.240 00:32:10.240 real 0m16.938s 00:32:10.240 user 0m34.756s 00:32:10.240 sys 0m2.869s 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:10.240 ************************************ 00:32:10.240 END TEST lvs_grow_dirty 00:32:10.240 ************************************ 
00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:10.240 nvmf_trace.0 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:10.240 11:14:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:10.240 rmmod nvme_tcp 00:32:10.240 rmmod nvme_fabrics 00:32:10.240 rmmod nvme_keyring 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3491022 ']' 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3491022 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3491022 ']' 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3491022 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:10.240 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3491022 00:32:10.500 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:10.500 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:10.500 
11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3491022' 00:32:10.500 killing process with pid 3491022 00:32:10.500 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3491022 00:32:10.500 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3491022 00:32:10.500 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:10.500 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:10.500 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:10.500 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:10.500 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:10.500 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:10.500 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:10.500 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:10.500 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:10.500 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.500 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:10.500 11:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.045 
11:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:13.045 00:32:13.045 real 0m43.403s 00:32:13.045 user 0m52.928s 00:32:13.045 sys 0m9.893s 00:32:13.045 11:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:13.045 11:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:13.045 ************************************ 00:32:13.045 END TEST nvmf_lvs_grow 00:32:13.045 ************************************ 00:32:13.045 11:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:13.045 11:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:13.045 11:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:13.045 11:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:13.045 ************************************ 00:32:13.045 START TEST nvmf_bdev_io_wait 00:32:13.045 ************************************ 00:32:13.045 11:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:13.045 * Looking for test storage... 
00:32:13.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:13.045 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:13.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.046 --rc genhtml_branch_coverage=1 00:32:13.046 --rc genhtml_function_coverage=1 00:32:13.046 --rc genhtml_legend=1 00:32:13.046 --rc geninfo_all_blocks=1 00:32:13.046 --rc geninfo_unexecuted_blocks=1 00:32:13.046 00:32:13.046 ' 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:13.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.046 --rc genhtml_branch_coverage=1 00:32:13.046 --rc genhtml_function_coverage=1 00:32:13.046 --rc genhtml_legend=1 00:32:13.046 --rc geninfo_all_blocks=1 00:32:13.046 --rc geninfo_unexecuted_blocks=1 00:32:13.046 00:32:13.046 ' 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:13.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.046 --rc genhtml_branch_coverage=1 00:32:13.046 --rc genhtml_function_coverage=1 00:32:13.046 --rc genhtml_legend=1 00:32:13.046 --rc geninfo_all_blocks=1 00:32:13.046 --rc geninfo_unexecuted_blocks=1 00:32:13.046 00:32:13.046 ' 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:13.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.046 --rc genhtml_branch_coverage=1 00:32:13.046 --rc genhtml_function_coverage=1 
00:32:13.046 --rc genhtml_legend=1 00:32:13.046 --rc geninfo_all_blocks=1 00:32:13.046 --rc geninfo_unexecuted_blocks=1 00:32:13.046 00:32:13.046 ' 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:13.046 11:14:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.046 11:14:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:13.046 11:14:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:13.046 11:14:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:13.046 11:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:19.635 11:14:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:19.635 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:19.635 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:19.635 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.635 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:19.635 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:19.636 11:14:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:19.636 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:19.896 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:19.896 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:19.896 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:19.896 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:19.896 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:19.896 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:19.896 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:19.896 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:19.896 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:19.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:19.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:32:19.896 00:32:19.896 --- 10.0.0.2 ping statistics --- 00:32:19.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:19.896 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:32:19.896 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:20.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:20.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:32:20.157 00:32:20.157 --- 10.0.0.1 ping statistics --- 00:32:20.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.157 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:20.157 11:14:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3495755 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3495755 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3495755 ']' 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:20.157 11:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:20.157 [2024-11-06 11:14:11.428288] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:20.157 [2024-11-06 11:14:11.429365] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:32:20.157 [2024-11-06 11:14:11.429412] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:20.157 [2024-11-06 11:14:11.512156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:20.157 [2024-11-06 11:14:11.555094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:20.157 [2024-11-06 11:14:11.555133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:20.157 [2024-11-06 11:14:11.555141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:20.157 [2024-11-06 11:14:11.555148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:20.157 [2024-11-06 11:14:11.555154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:20.157 [2024-11-06 11:14:11.556976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.157 [2024-11-06 11:14:11.557092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:20.157 [2024-11-06 11:14:11.557249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.157 [2024-11-06 11:14:11.557249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:20.157 [2024-11-06 11:14:11.557520] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.098 11:14:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.098 [2024-11-06 11:14:12.331791] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:21.098 [2024-11-06 11:14:12.332215] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:21.098 [2024-11-06 11:14:12.332913] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:21.098 [2024-11-06 11:14:12.333050] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.098 [2024-11-06 11:14:12.341720] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.098 Malloc0 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.098 11:14:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.098 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.099 [2024-11-06 11:14:12.405892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3496088 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3496091 00:32:21.099 11:14:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:21.099 { 00:32:21.099 "params": { 00:32:21.099 "name": "Nvme$subsystem", 00:32:21.099 "trtype": "$TEST_TRANSPORT", 00:32:21.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:21.099 "adrfam": "ipv4", 00:32:21.099 "trsvcid": "$NVMF_PORT", 00:32:21.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:21.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:21.099 "hdgst": ${hdgst:-false}, 00:32:21.099 "ddgst": ${ddgst:-false} 00:32:21.099 }, 00:32:21.099 "method": "bdev_nvme_attach_controller" 00:32:21.099 } 00:32:21.099 EOF 00:32:21.099 )") 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3496093 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:21.099 11:14:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:21.099 { 00:32:21.099 "params": { 00:32:21.099 "name": "Nvme$subsystem", 00:32:21.099 "trtype": "$TEST_TRANSPORT", 00:32:21.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:21.099 "adrfam": "ipv4", 00:32:21.099 "trsvcid": "$NVMF_PORT", 00:32:21.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:21.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:21.099 "hdgst": ${hdgst:-false}, 00:32:21.099 "ddgst": ${ddgst:-false} 00:32:21.099 }, 00:32:21.099 "method": "bdev_nvme_attach_controller" 00:32:21.099 } 00:32:21.099 EOF 00:32:21.099 )") 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3496097 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:21.099 { 00:32:21.099 "params": { 00:32:21.099 "name": 
"Nvme$subsystem", 00:32:21.099 "trtype": "$TEST_TRANSPORT", 00:32:21.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:21.099 "adrfam": "ipv4", 00:32:21.099 "trsvcid": "$NVMF_PORT", 00:32:21.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:21.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:21.099 "hdgst": ${hdgst:-false}, 00:32:21.099 "ddgst": ${ddgst:-false} 00:32:21.099 }, 00:32:21.099 "method": "bdev_nvme_attach_controller" 00:32:21.099 } 00:32:21.099 EOF 00:32:21.099 )") 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:21.099 { 00:32:21.099 "params": { 00:32:21.099 "name": "Nvme$subsystem", 00:32:21.099 "trtype": "$TEST_TRANSPORT", 00:32:21.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:21.099 "adrfam": "ipv4", 00:32:21.099 "trsvcid": "$NVMF_PORT", 00:32:21.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:21.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:21.099 "hdgst": ${hdgst:-false}, 00:32:21.099 "ddgst": ${ddgst:-false} 00:32:21.099 }, 00:32:21.099 "method": 
"bdev_nvme_attach_controller" 00:32:21.099 } 00:32:21.099 EOF 00:32:21.099 )") 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3496088 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:21.099 "params": { 00:32:21.099 "name": "Nvme1", 00:32:21.099 "trtype": "tcp", 00:32:21.099 "traddr": "10.0.0.2", 00:32:21.099 "adrfam": "ipv4", 00:32:21.099 "trsvcid": "4420", 00:32:21.099 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:21.099 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:21.099 "hdgst": false, 00:32:21.099 "ddgst": false 00:32:21.099 }, 00:32:21.099 "method": "bdev_nvme_attach_controller" 00:32:21.099 }' 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:21.099 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:21.099 "params": { 00:32:21.100 "name": "Nvme1", 00:32:21.100 "trtype": "tcp", 00:32:21.100 "traddr": "10.0.0.2", 00:32:21.100 "adrfam": "ipv4", 00:32:21.100 "trsvcid": "4420", 00:32:21.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:21.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:21.100 "hdgst": false, 00:32:21.100 "ddgst": false 00:32:21.100 }, 00:32:21.100 "method": "bdev_nvme_attach_controller" 00:32:21.100 }' 00:32:21.100 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:21.100 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:21.100 "params": { 00:32:21.100 "name": "Nvme1", 00:32:21.100 "trtype": "tcp", 00:32:21.100 "traddr": "10.0.0.2", 00:32:21.100 "adrfam": "ipv4", 00:32:21.100 "trsvcid": "4420", 00:32:21.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:21.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:21.100 "hdgst": false, 00:32:21.100 "ddgst": false 00:32:21.100 }, 00:32:21.100 "method": "bdev_nvme_attach_controller" 00:32:21.100 }' 00:32:21.100 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:21.100 11:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:21.100 "params": { 00:32:21.100 "name": "Nvme1", 00:32:21.100 "trtype": "tcp", 00:32:21.100 "traddr": "10.0.0.2", 00:32:21.100 "adrfam": "ipv4", 00:32:21.100 "trsvcid": "4420", 00:32:21.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:21.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:21.100 "hdgst": false, 00:32:21.100 "ddgst": false 00:32:21.100 }, 00:32:21.100 "method": "bdev_nvme_attach_controller" 
00:32:21.100 }' 00:32:21.100 [2024-11-06 11:14:12.460264] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:32:21.100 [2024-11-06 11:14:12.460317] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:21.100 [2024-11-06 11:14:12.463009] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:32:21.100 [2024-11-06 11:14:12.463032] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:32:21.100 [2024-11-06 11:14:12.463054] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:21.100 [2024-11-06 11:14:12.463080] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:21.100 [2024-11-06 11:14:12.463647] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:32:21.100 [2024-11-06 11:14:12.463691] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:21.360 [2024-11-06 11:14:12.614627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.360 [2024-11-06 11:14:12.644437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:21.360 [2024-11-06 11:14:12.672869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.360 [2024-11-06 11:14:12.697539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.360 [2024-11-06 11:14:12.702562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:21.360 [2024-11-06 11:14:12.726060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:21.360 [2024-11-06 11:14:12.757242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.620 [2024-11-06 11:14:12.785111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:21.620 Running I/O for 1 seconds... 00:32:21.620 Running I/O for 1 seconds... 00:32:21.620 Running I/O for 1 seconds... 00:32:21.620 Running I/O for 1 seconds... 
00:32:22.561 12407.00 IOPS, 48.46 MiB/s 00:32:22.561 Latency(us) 00:32:22.561 [2024-11-06T10:14:13.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.561 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:22.561 Nvme1n1 : 1.01 12454.11 48.65 0.00 0.00 10241.56 4833.28 12451.84 00:32:22.561 [2024-11-06T10:14:13.983Z] =================================================================================================================== 00:32:22.561 [2024-11-06T10:14:13.983Z] Total : 12454.11 48.65 0.00 0.00 10241.56 4833.28 12451.84 00:32:22.561 11446.00 IOPS, 44.71 MiB/s 00:32:22.561 Latency(us) 00:32:22.561 [2024-11-06T10:14:13.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.561 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:22.561 Nvme1n1 : 1.01 11524.52 45.02 0.00 0.00 11069.78 2007.04 14854.83 00:32:22.561 [2024-11-06T10:14:13.983Z] =================================================================================================================== 00:32:22.561 [2024-11-06T10:14:13.983Z] Total : 11524.52 45.02 0.00 0.00 11069.78 2007.04 14854.83 00:32:22.561 19266.00 IOPS, 75.26 MiB/s 00:32:22.561 Latency(us) 00:32:22.561 [2024-11-06T10:14:13.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.561 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:22.561 Nvme1n1 : 1.01 19354.12 75.60 0.00 0.00 6599.93 2075.31 11086.51 00:32:22.561 [2024-11-06T10:14:13.983Z] =================================================================================================================== 00:32:22.561 [2024-11-06T10:14:13.983Z] Total : 19354.12 75.60 0.00 0.00 6599.93 2075.31 11086.51 00:32:22.821 11:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3496091 00:32:22.822 11:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@39 -- # wait 3496093 00:32:22.822 188448.00 IOPS, 736.12 MiB/s 00:32:22.822 Latency(us) 00:32:22.822 [2024-11-06T10:14:14.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.822 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:22.822 Nvme1n1 : 1.00 188074.40 734.67 0.00 0.00 676.96 300.37 1966.08 00:32:22.822 [2024-11-06T10:14:14.244Z] =================================================================================================================== 00:32:22.822 [2024-11-06T10:14:14.244Z] Total : 188074.40 734.67 0.00 0.00 676.96 300.37 1966.08 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3496097 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:22.822 11:14:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:22.822 rmmod nvme_tcp 00:32:22.822 rmmod nvme_fabrics 00:32:22.822 rmmod nvme_keyring 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3495755 ']' 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3495755 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3495755 ']' 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3495755 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:22.822 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3495755 00:32:23.082 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:23.082 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:23.082 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3495755' 00:32:23.082 killing process with pid 3495755 00:32:23.082 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3495755 00:32:23.082 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3495755 00:32:23.082 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:23.082 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:23.082 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:23.082 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:23.082 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:32:23.082 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:23.082 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:32:23.082 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:23.082 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:23.082 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.082 11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:23.082 
11:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.627 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:25.627 00:32:25.627 real 0m12.515s 00:32:25.627 user 0m14.795s 00:32:25.627 sys 0m7.154s 00:32:25.627 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:25.627 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:25.628 ************************************ 00:32:25.628 END TEST nvmf_bdev_io_wait 00:32:25.628 ************************************ 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:25.628 ************************************ 00:32:25.628 START TEST nvmf_queue_depth 00:32:25.628 ************************************ 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:25.628 * Looking for test storage... 
00:32:25.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:25.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.628 --rc genhtml_branch_coverage=1 00:32:25.628 --rc genhtml_function_coverage=1 00:32:25.628 --rc genhtml_legend=1 00:32:25.628 --rc geninfo_all_blocks=1 00:32:25.628 --rc geninfo_unexecuted_blocks=1 00:32:25.628 00:32:25.628 ' 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:25.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.628 --rc genhtml_branch_coverage=1 00:32:25.628 --rc genhtml_function_coverage=1 00:32:25.628 --rc genhtml_legend=1 00:32:25.628 --rc geninfo_all_blocks=1 00:32:25.628 --rc geninfo_unexecuted_blocks=1 00:32:25.628 00:32:25.628 ' 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:25.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.628 --rc genhtml_branch_coverage=1 00:32:25.628 --rc genhtml_function_coverage=1 00:32:25.628 --rc genhtml_legend=1 00:32:25.628 --rc geninfo_all_blocks=1 00:32:25.628 --rc geninfo_unexecuted_blocks=1 00:32:25.628 00:32:25.628 ' 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:25.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.628 --rc genhtml_branch_coverage=1 00:32:25.628 --rc genhtml_function_coverage=1 00:32:25.628 --rc genhtml_legend=1 00:32:25.628 --rc 
geninfo_all_blocks=1 00:32:25.628 --rc geninfo_unexecuted_blocks=1 00:32:25.628 00:32:25.628 ' 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.628 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.628 11:14:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:25.629 11:14:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:25.629 11:14:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:25.629 11:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:32.276 
11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:32.276 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:32.276 11:14:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:32.276 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:32.276 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:32.276 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.276 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:32.276 11:14:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:32.277 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:32.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:32.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:32:32.538 00:32:32.538 --- 10.0.0.2 ping statistics --- 00:32:32.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.538 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:32.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:32.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:32:32.538 00:32:32.538 --- 10.0.0.1 ping statistics --- 00:32:32.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.538 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:32.538 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:32.799 11:14:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:32.799 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:32.799 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:32.799 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:32.799 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3500471 00:32:32.799 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3500471 00:32:32.799 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:32.799 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3500471 ']' 00:32:32.799 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.799 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:32.799 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:32.799 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:32.799 11:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:32.799 [2024-11-06 11:14:24.047719] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:32.799 [2024-11-06 11:14:24.048874] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:32:32.799 [2024-11-06 11:14:24.048927] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:32.799 [2024-11-06 11:14:24.152501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.799 [2024-11-06 11:14:24.202935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:32.799 [2024-11-06 11:14:24.202986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:32.799 [2024-11-06 11:14:24.202996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:32.799 [2024-11-06 11:14:24.203003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:32.799 [2024-11-06 11:14:24.203010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:32.799 [2024-11-06 11:14:24.203808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.059 [2024-11-06 11:14:24.279724] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:33.059 [2024-11-06 11:14:24.280023] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:33.630 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:33.630 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:32:33.630 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:33.630 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:33.630 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:33.630 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:33.630 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:33.631 [2024-11-06 11:14:24.900673] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:33.631 Malloc0 00:32:33.631 11:14:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:33.631 [2024-11-06 11:14:24.976774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.631 
11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3500814 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3500814 /var/tmp/bdevperf.sock 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3500814 ']' 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:33.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:33.631 11:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:33.631 [2024-11-06 11:14:25.033797] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:32:33.631 [2024-11-06 11:14:25.033853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3500814 ] 00:32:33.891 [2024-11-06 11:14:25.107104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.891 [2024-11-06 11:14:25.148205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.460 11:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:34.460 11:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:32:34.460 11:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:34.460 11:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.460 11:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:34.720 NVMe0n1 00:32:34.720 11:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.720 11:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:34.720 Running I/O for 10 seconds... 
00:32:37.042 9206.00 IOPS, 35.96 MiB/s [2024-11-06T10:14:29.404Z] 9220.00 IOPS, 36.02 MiB/s [2024-11-06T10:14:30.344Z] 9493.67 IOPS, 37.08 MiB/s [2024-11-06T10:14:31.283Z] 10073.50 IOPS, 39.35 MiB/s [2024-11-06T10:14:32.223Z] 10449.20 IOPS, 40.82 MiB/s [2024-11-06T10:14:33.163Z] 10751.33 IOPS, 42.00 MiB/s [2024-11-06T10:14:34.546Z] 10973.00 IOPS, 42.86 MiB/s [2024-11-06T10:14:35.488Z] 11142.38 IOPS, 43.52 MiB/s [2024-11-06T10:14:36.429Z] 11264.67 IOPS, 44.00 MiB/s [2024-11-06T10:14:36.429Z] 11368.50 IOPS, 44.41 MiB/s 00:32:45.007 Latency(us) 00:32:45.007 [2024-11-06T10:14:36.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.007 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:45.007 Verification LBA range: start 0x0 length 0x4000 00:32:45.007 NVMe0n1 : 10.06 11404.79 44.55 0.00 0.00 89489.29 22282.24 69468.16 00:32:45.007 [2024-11-06T10:14:36.429Z] =================================================================================================================== 00:32:45.007 [2024-11-06T10:14:36.429Z] Total : 11404.79 44.55 0.00 0.00 89489.29 22282.24 69468.16 00:32:45.007 { 00:32:45.007 "results": [ 00:32:45.007 { 00:32:45.007 "job": "NVMe0n1", 00:32:45.007 "core_mask": "0x1", 00:32:45.007 "workload": "verify", 00:32:45.007 "status": "finished", 00:32:45.007 "verify_range": { 00:32:45.007 "start": 0, 00:32:45.007 "length": 16384 00:32:45.007 }, 00:32:45.007 "queue_depth": 1024, 00:32:45.007 "io_size": 4096, 00:32:45.007 "runtime": 10.055425, 00:32:45.007 "iops": 11404.788957204693, 00:32:45.007 "mibps": 44.54995686408083, 00:32:45.007 "io_failed": 0, 00:32:45.007 "io_timeout": 0, 00:32:45.007 "avg_latency_us": 89489.29418253692, 00:32:45.007 "min_latency_us": 22282.24, 00:32:45.007 "max_latency_us": 69468.16 00:32:45.007 } 00:32:45.007 ], 00:32:45.007 "core_count": 1 00:32:45.007 } 00:32:45.007 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 3500814 00:32:45.007 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3500814 ']' 00:32:45.007 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3500814 00:32:45.007 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:32:45.007 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:45.007 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3500814 00:32:45.007 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:45.007 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:45.007 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3500814' 00:32:45.007 killing process with pid 3500814 00:32:45.007 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3500814 00:32:45.007 Received shutdown signal, test time was about 10.000000 seconds 00:32:45.007 00:32:45.007 Latency(us) 00:32:45.007 [2024-11-06T10:14:36.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.007 [2024-11-06T10:14:36.429Z] =================================================================================================================== 00:32:45.007 [2024-11-06T10:14:36.429Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:45.007 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3500814 00:32:45.007 11:14:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:45.007 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:45.007 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:45.007 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:45.008 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:45.008 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:45.008 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:45.008 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:45.008 rmmod nvme_tcp 00:32:45.269 rmmod nvme_fabrics 00:32:45.269 rmmod nvme_keyring 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3500471 ']' 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3500471 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3500471 ']' 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3500471 00:32:45.269 11:14:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3500471 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3500471' 00:32:45.269 killing process with pid 3500471 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3500471 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3500471 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.269 11:14:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.818 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:47.818 00:32:47.818 real 0m22.193s 00:32:47.818 user 0m24.758s 00:32:47.818 sys 0m7.127s 00:32:47.818 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:47.818 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:47.818 ************************************ 00:32:47.818 END TEST nvmf_queue_depth 00:32:47.818 ************************************ 00:32:47.819 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:47.819 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:47.819 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:47.819 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:47.819 ************************************ 00:32:47.819 START 
TEST nvmf_target_multipath 00:32:47.819 ************************************ 00:32:47.819 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:47.819 * Looking for test storage... 00:32:47.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:47.819 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:47.819 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:32:47.819 11:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:47.819 11:14:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:47.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.819 --rc genhtml_branch_coverage=1 00:32:47.819 --rc genhtml_function_coverage=1 00:32:47.819 --rc genhtml_legend=1 00:32:47.819 --rc geninfo_all_blocks=1 00:32:47.819 --rc geninfo_unexecuted_blocks=1 00:32:47.819 00:32:47.819 ' 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:47.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.819 --rc genhtml_branch_coverage=1 00:32:47.819 --rc genhtml_function_coverage=1 00:32:47.819 --rc genhtml_legend=1 00:32:47.819 --rc geninfo_all_blocks=1 00:32:47.819 --rc geninfo_unexecuted_blocks=1 00:32:47.819 00:32:47.819 ' 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:47.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.819 --rc genhtml_branch_coverage=1 00:32:47.819 --rc genhtml_function_coverage=1 00:32:47.819 --rc genhtml_legend=1 00:32:47.819 --rc geninfo_all_blocks=1 00:32:47.819 --rc geninfo_unexecuted_blocks=1 00:32:47.819 00:32:47.819 ' 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:47.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.819 --rc genhtml_branch_coverage=1 00:32:47.819 --rc genhtml_function_coverage=1 00:32:47.819 --rc genhtml_legend=1 00:32:47.819 --rc geninfo_all_blocks=1 00:32:47.819 --rc geninfo_unexecuted_blocks=1 00:32:47.819 00:32:47.819 ' 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.819 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:47.820 11:14:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.820 11:14:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:47.820 11:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:32:55.971 11:14:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:55.971 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:55.972 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:55.972 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:55.972 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.972 11:14:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:55.972 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:55.972 11:14:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:55.972 11:14:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:55.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:55.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:32:55.972 00:32:55.972 --- 10.0.0.2 ping statistics --- 00:32:55.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.972 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:55.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:55.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:32:55.972 00:32:55.972 --- 10.0.0.1 ping statistics --- 00:32:55.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.972 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:55.972 only one NIC for nvmf test 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:55.972 11:14:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:55.972 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:55.973 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:55.973 rmmod nvme_tcp 00:32:55.973 rmmod nvme_fabrics 00:32:55.973 rmmod nvme_keyring 00:32:55.973 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:55.973 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:55.973 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:55.973 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:55.973 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:55.973 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:55.973 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:55.973 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:55.973 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:55.973 11:14:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:55.973 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:55.973 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:55.973 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:55.973 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.973 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:55.973 11:14:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:57.360 
11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.360 11:14:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:57.360 00:32:57.360 real 0m9.907s 00:32:57.360 user 0m2.154s 00:32:57.360 sys 0m5.701s 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:57.360 ************************************ 00:32:57.360 END TEST nvmf_target_multipath 00:32:57.360 ************************************ 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:57.360 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:57.623 ************************************ 00:32:57.623 START TEST nvmf_zcopy 00:32:57.623 ************************************ 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:57.623 * Looking for test storage... 
00:32:57.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:57.623 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:57.624 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:57.624 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:57.624 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:57.624 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:57.624 11:14:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:57.624 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:57.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.624 --rc genhtml_branch_coverage=1 00:32:57.624 --rc genhtml_function_coverage=1 00:32:57.624 --rc genhtml_legend=1 00:32:57.624 --rc geninfo_all_blocks=1 00:32:57.624 --rc geninfo_unexecuted_blocks=1 00:32:57.624 00:32:57.624 ' 00:32:57.624 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:57.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.624 --rc genhtml_branch_coverage=1 00:32:57.624 --rc genhtml_function_coverage=1 00:32:57.624 --rc genhtml_legend=1 00:32:57.624 --rc geninfo_all_blocks=1 00:32:57.624 --rc geninfo_unexecuted_blocks=1 00:32:57.624 00:32:57.624 ' 00:32:57.624 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:57.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.624 --rc genhtml_branch_coverage=1 00:32:57.624 --rc genhtml_function_coverage=1 00:32:57.624 --rc genhtml_legend=1 00:32:57.624 --rc geninfo_all_blocks=1 00:32:57.624 --rc geninfo_unexecuted_blocks=1 00:32:57.624 00:32:57.624 ' 00:32:57.624 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:57.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.624 --rc genhtml_branch_coverage=1 00:32:57.624 --rc genhtml_function_coverage=1 00:32:57.624 --rc genhtml_legend=1 00:32:57.624 --rc geninfo_all_blocks=1 00:32:57.624 --rc geninfo_unexecuted_blocks=1 00:32:57.624 00:32:57.624 ' 00:32:57.624 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:57.624 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:57.624 11:14:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.624 11:14:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:57.624 11:14:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:57.624 11:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:05.781 
11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:05.781 11:14:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:05.781 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:05.782 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:05.782 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:05.782 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:05.782 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:05.782 11:14:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:05.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:05.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:33:05.782 00:33:05.782 --- 10.0.0.2 ping statistics --- 00:33:05.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.782 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:05.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:05.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:33:05.782 00:33:05.782 --- 10.0.0.1 ping statistics --- 00:33:05.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.782 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=3511146 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3511146 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3511146 ']' 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:05.782 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:05.783 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:05.783 11:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:05.783 [2024-11-06 11:14:56.401421] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:05.783 [2024-11-06 11:14:56.403014] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:33:05.783 [2024-11-06 11:14:56.403086] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:05.783 [2024-11-06 11:14:56.505554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.783 [2024-11-06 11:14:56.555373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:05.783 [2024-11-06 11:14:56.555423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:05.783 [2024-11-06 11:14:56.555432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:05.783 [2024-11-06 11:14:56.555440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:05.783 [2024-11-06 11:14:56.555446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:05.783 [2024-11-06 11:14:56.556187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.783 [2024-11-06 11:14:56.631898] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:05.783 [2024-11-06 11:14:56.632189] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:06.045 [2024-11-06 11:14:57.261083] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:06.045 
11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:06.045 [2024-11-06 11:14:57.289329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:06.045 malloc0 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:06.045 { 00:33:06.045 "params": { 00:33:06.045 "name": "Nvme$subsystem", 00:33:06.045 "trtype": "$TEST_TRANSPORT", 00:33:06.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:06.045 "adrfam": "ipv4", 00:33:06.045 "trsvcid": "$NVMF_PORT", 00:33:06.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:06.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:06.045 "hdgst": ${hdgst:-false}, 00:33:06.045 "ddgst": ${ddgst:-false} 00:33:06.045 }, 00:33:06.045 "method": "bdev_nvme_attach_controller" 00:33:06.045 } 00:33:06.045 EOF 00:33:06.045 )") 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:06.045 11:14:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:06.045 11:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:06.045 "params": { 00:33:06.045 "name": "Nvme1", 00:33:06.045 "trtype": "tcp", 00:33:06.045 "traddr": "10.0.0.2", 00:33:06.045 "adrfam": "ipv4", 00:33:06.045 "trsvcid": "4420", 00:33:06.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:06.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:06.045 "hdgst": false, 00:33:06.045 "ddgst": false 00:33:06.045 }, 00:33:06.045 "method": "bdev_nvme_attach_controller" 00:33:06.045 }' 00:33:06.045 [2024-11-06 11:14:57.374505] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:33:06.045 [2024-11-06 11:14:57.374566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3511404 ] 00:33:06.045 [2024-11-06 11:14:57.445344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.306 [2024-11-06 11:14:57.482315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.567 Running I/O for 10 seconds... 
00:33:08.450 6616.00 IOPS, 51.69 MiB/s [2024-11-06T10:15:00.812Z] 6655.00 IOPS, 51.99 MiB/s [2024-11-06T10:15:01.832Z] 6662.67 IOPS, 52.05 MiB/s [2024-11-06T10:15:02.774Z] 6673.50 IOPS, 52.14 MiB/s [2024-11-06T10:15:04.155Z] 6679.20 IOPS, 52.18 MiB/s [2024-11-06T10:15:05.095Z] 6789.17 IOPS, 53.04 MiB/s [2024-11-06T10:15:06.036Z] 7202.71 IOPS, 56.27 MiB/s [2024-11-06T10:15:06.977Z] 7513.50 IOPS, 58.70 MiB/s [2024-11-06T10:15:07.917Z] 7755.56 IOPS, 60.59 MiB/s [2024-11-06T10:15:07.917Z] 7950.40 IOPS, 62.11 MiB/s 00:33:16.495 Latency(us) 00:33:16.495 [2024-11-06T10:15:07.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.495 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:16.495 Verification LBA range: start 0x0 length 0x1000 00:33:16.495 Nvme1n1 : 10.01 7954.14 62.14 0.00 0.00 16039.71 1774.93 25886.72 00:33:16.495 [2024-11-06T10:15:07.917Z] =================================================================================================================== 00:33:16.495 [2024-11-06T10:15:07.917Z] Total : 7954.14 62.14 0.00 0.00 16039.71 1774.93 25886.72 00:33:16.495 11:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3513466 00:33:16.495 11:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:16.495 11:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:16.495 11:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:16.495 11:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:16.495 11:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:16.495 11:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:16.495 11:15:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:16.495 { 00:33:16.495 "params": { 00:33:16.495 "name": "Nvme$subsystem", 00:33:16.495 "trtype": "$TEST_TRANSPORT", 00:33:16.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:16.495 "adrfam": "ipv4", 00:33:16.495 "trsvcid": "$NVMF_PORT", 00:33:16.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:16.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:16.495 "hdgst": ${hdgst:-false}, 00:33:16.495 "ddgst": ${ddgst:-false} 00:33:16.495 }, 00:33:16.495 "method": "bdev_nvme_attach_controller" 00:33:16.495 } 00:33:16.495 EOF 00:33:16.495 )") 00:33:16.495 11:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:16.495 11:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:16.495 [2024-11-06 11:15:07.900591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.495 [2024-11-06 11:15:07.900619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.495 11:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
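The heredoc template above relies on shell default expansion: `${hdgst:-false}` and `${ddgst:-false}` substitute the variable's value when it is set and non-empty, and the literal `false` otherwise, so digests are off unless the test opts in. A minimal sketch of that `${VAR:-default}` behavior (the function `expand_defaults` is a hypothetical illustration, not code from `nvmf/common.sh`):

```python
import re

def expand_defaults(template: str, env: dict) -> str:
    """Mimic shell ${VAR:-default}: use env[VAR] when set and
    non-empty, otherwise fall back to the literal default."""
    def repl(m):
        var, default = m.group(1), m.group(2)
        val = env.get(var)
        return val if val else default
    return re.sub(r"\$\{(\w+):-([^}]*)\}", repl, template)

snippet = '"hdgst": ${hdgst:-false}, "ddgst": ${ddgst:-false}'
print(expand_defaults(snippet, {"hdgst": "true"}))
# → "hdgst": true, "ddgst": false   (only hdgst was set)
```

This is why the expanded config printed by `jq .` shows `"hdgst": false, "ddgst": false` when neither variable is exported.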
00:33:16.495 11:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:16.495 11:15:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:16.495 "params": { 00:33:16.495 "name": "Nvme1", 00:33:16.495 "trtype": "tcp", 00:33:16.495 "traddr": "10.0.0.2", 00:33:16.495 "adrfam": "ipv4", 00:33:16.495 "trsvcid": "4420", 00:33:16.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:16.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:16.495 "hdgst": false, 00:33:16.495 "ddgst": false 00:33:16.495 }, 00:33:16.495 "method": "bdev_nvme_attach_controller" 00:33:16.495 }' 00:33:16.495 [2024-11-06 11:15:07.912558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.495 [2024-11-06 11:15:07.912567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.755 [2024-11-06 11:15:07.924556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.755 [2024-11-06 11:15:07.924565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.755 [2024-11-06 11:15:07.936555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.755 [2024-11-06 11:15:07.936563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.755 [2024-11-06 11:15:07.942292] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
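The MiB/s column in the earlier verification summary follows directly from the IOPS column and bdevperf's IO size (`-o 8192`): MiB/s = IOPS × 8192 / 2^20. Checking the final `Nvme1n1` line of the 10-second run as a quick arithmetic sanity check:

```python
IO_SIZE = 8192          # bytes per IO, from bdevperf's -o 8192 flag
iops = 7954.14          # final Nvme1n1 result in the summary table

mib_per_s = iops * IO_SIZE / (1024 * 1024)
print(round(mib_per_s, 2))  # → 62.14, matching the table's MiB/s column
```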
00:33:16.755 [2024-11-06 11:15:07.942342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3513466 ] 00:33:16.755 [2024-11-06 11:15:07.948557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.755 [2024-11-06 11:15:07.948565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.755 [2024-11-06 11:15:07.960555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.755 [2024-11-06 11:15:07.960563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.755 [2024-11-06 11:15:07.972555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.755 [2024-11-06 11:15:07.972563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.755 [2024-11-06 11:15:07.984555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.756 [2024-11-06 11:15:07.984563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.756 [2024-11-06 11:15:07.996556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.756 [2024-11-06 11:15:07.996563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.756 [2024-11-06 11:15:08.008556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.756 [2024-11-06 11:15:08.008563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.756 [2024-11-06 11:15:08.011876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.756 [2024-11-06 11:15:08.020556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:16.756 [2024-11-06 11:15:08.020566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.756 [2024-11-06 11:15:08.032556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.756 [2024-11-06 11:15:08.032564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.756 [2024-11-06 11:15:08.044556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.756 [2024-11-06 11:15:08.044566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.756 [2024-11-06 11:15:08.047188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.756 [2024-11-06 11:15:08.056556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.756 [2024-11-06 11:15:08.056564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.756 [2024-11-06 11:15:08.068562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.756 [2024-11-06 11:15:08.068575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.756 [2024-11-06 11:15:08.080558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.756 [2024-11-06 11:15:08.080570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.756 [2024-11-06 11:15:08.092557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.756 [2024-11-06 11:15:08.092568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.756 [2024-11-06 11:15:08.104560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.756 [2024-11-06 11:15:08.104571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.756 [2024-11-06 11:15:08.116562] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.756 [2024-11-06 11:15:08.116575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.756 [2024-11-06 11:15:08.128559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.756 [2024-11-06 11:15:08.128572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.756 [2024-11-06 11:15:08.140559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.756 [2024-11-06 11:15:08.140569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.756 [2024-11-06 11:15:08.152558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.756 [2024-11-06 11:15:08.152566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:16.756 [2024-11-06 11:15:08.164556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:16.756 [2024-11-06 11:15:08.164564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.176557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.176565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.188557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.188567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.200558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.200567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.212557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.212564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.224556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.224563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.236556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.236563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.248557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.248570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.260556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.260563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.272556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.272563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.284557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.284566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.296556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.296563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.308556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 
[2024-11-06 11:15:08.308563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.320556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.320563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.332753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.332765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.344562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.344575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 Running I/O for 5 seconds... 00:33:17.017 [2024-11-06 11:15:08.359915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.359931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.373225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.373240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.388032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.388047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.017 [2024-11-06 11:15:08.401113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.017 [2024-11-06 11:15:08.401128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.018 [2024-11-06 11:15:08.415621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.018 [2024-11-06 
11:15:08.415637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.018 [2024-11-06 11:15:08.428665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.018 [2024-11-06 11:15:08.428681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.279 [2024-11-06 11:15:08.441370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.441385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.279 [2024-11-06 11:15:08.455703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.455718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.279 [2024-11-06 11:15:08.468769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.468784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.279 [2024-11-06 11:15:08.481780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.481794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.279 [2024-11-06 11:15:08.495668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.495682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.279 [2024-11-06 11:15:08.508914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.508928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.279 [2024-11-06 11:15:08.523699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.523715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:33:17.279 [2024-11-06 11:15:08.536678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.536693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.279 [2024-11-06 11:15:08.549339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.549354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.279 [2024-11-06 11:15:08.564350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.564365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.279 [2024-11-06 11:15:08.577020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.577034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.279 [2024-11-06 11:15:08.592157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.592172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.279 [2024-11-06 11:15:08.605065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.605080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.279 [2024-11-06 11:15:08.619821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.619836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.279 [2024-11-06 11:15:08.632912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.632927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.279 
[2024-11-06 11:15:08.647442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.647457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.279 [2024-11-06 11:15:08.660277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.660292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.279 [2024-11-06 11:15:08.673000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.279 [2024-11-06 11:15:08.673014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.280 [2024-11-06 11:15:08.688157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.280 [2024-11-06 11:15:08.688172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.701302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.701317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.715756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.715772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.728908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.728922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.743627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.743642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.756587] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.756603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.769285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.769300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.783860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.783876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.796980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.796995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.811687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.811702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.824683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.824698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.837220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.837235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.851531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.851547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.864540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.864555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.877690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.877705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.891801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.891817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.904921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.904936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.919679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.919693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.932562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.932576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.945395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.945409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.542 [2024-11-06 11:15:08.960240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.542 [2024-11-06 11:15:08.960257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:08.973407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 
[2024-11-06 11:15:08.973422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:08.986025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:08.986039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:08.999942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:08.999957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:09.012793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:09.012807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:09.025535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:09.025550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:09.040252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:09.040267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:09.053456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:09.053470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:09.067418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:09.067433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:09.080353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:09.080368] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:09.092998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:09.093012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:09.107753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:09.107768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:09.120771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:09.120786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:09.133975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:09.133989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:09.147857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:09.147873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:09.160676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:09.160691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:09.173498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:09.173512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:09.187366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:09.187381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:17.804 [2024-11-06 11:15:09.200405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:09.200421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.804 [2024-11-06 11:15:09.213576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.804 [2024-11-06 11:15:09.213591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.066 [2024-11-06 11:15:09.227845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.066 [2024-11-06 11:15:09.227861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.066 [2024-11-06 11:15:09.240789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.066 [2024-11-06 11:15:09.240805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.066 [2024-11-06 11:15:09.253854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.066 [2024-11-06 11:15:09.253869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.066 [2024-11-06 11:15:09.267690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.066 [2024-11-06 11:15:09.267706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.066 [2024-11-06 11:15:09.280609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.066 [2024-11-06 11:15:09.280624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.066 [2024-11-06 11:15:09.294047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.066 [2024-11-06 11:15:09.294063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.066 [2024-11-06 11:15:09.307969] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.066 [2024-11-06 11:15:09.307984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.066
[... the same error pair (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: "Unable to add namespace") repeats at roughly 13 ms intervals from 11:15:09.321 through 11:15:11.627; repeated occurrences elided. Interleaved throughput samples retained below ...]
19053.00 IOPS, 148.85 MiB/s [2024-11-06T10:15:09.488Z]
19121.50 IOPS, 149.39 MiB/s [2024-11-06T10:15:10.532Z]
19141.67 IOPS, 149.54 MiB/s [2024-11-06T10:15:11.576Z]
[2024-11-06 11:15:11.627319] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.415 [2024-11-06 11:15:11.640346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.415 [2024-11-06 11:15:11.640361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.415 [2024-11-06 11:15:11.653506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.415 [2024-11-06 11:15:11.653520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.415 [2024-11-06 11:15:11.667820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.415 [2024-11-06 11:15:11.667835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.415 [2024-11-06 11:15:11.681024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.415 [2024-11-06 11:15:11.681038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.415 [2024-11-06 11:15:11.696013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.415 [2024-11-06 11:15:11.696028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.415 [2024-11-06 11:15:11.709286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.415 [2024-11-06 11:15:11.709301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.415 [2024-11-06 11:15:11.723818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.415 [2024-11-06 11:15:11.723832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.415 [2024-11-06 11:15:11.737000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.415 [2024-11-06 11:15:11.737014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:20.415 [2024-11-06 11:15:11.751960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.415 [2024-11-06 11:15:11.751975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.415 [2024-11-06 11:15:11.765136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.415 [2024-11-06 11:15:11.765151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.415 [2024-11-06 11:15:11.780188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.415 [2024-11-06 11:15:11.780202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.415 [2024-11-06 11:15:11.793126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.415 [2024-11-06 11:15:11.793140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.415 [2024-11-06 11:15:11.807405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.415 [2024-11-06 11:15:11.807420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.415 [2024-11-06 11:15:11.820539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.415 [2024-11-06 11:15:11.820554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.415 [2024-11-06 11:15:11.833568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.415 [2024-11-06 11:15:11.833582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.675 [2024-11-06 11:15:11.848257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.675 [2024-11-06 11:15:11.848271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.675 [2024-11-06 11:15:11.861633] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.675 [2024-11-06 11:15:11.861647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.675 [2024-11-06 11:15:11.875603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.675 [2024-11-06 11:15:11.875617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.675 [2024-11-06 11:15:11.888924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.675 [2024-11-06 11:15:11.888938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.675 [2024-11-06 11:15:11.903543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.675 [2024-11-06 11:15:11.903558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.675 [2024-11-06 11:15:11.916735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.675 [2024-11-06 11:15:11.916754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.675 [2024-11-06 11:15:11.929703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.675 [2024-11-06 11:15:11.929718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.675 [2024-11-06 11:15:11.943426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.675 [2024-11-06 11:15:11.943441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.675 [2024-11-06 11:15:11.956184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.675 [2024-11-06 11:15:11.956200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.675 [2024-11-06 11:15:11.969069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:20.675 [2024-11-06 11:15:11.969083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.676 [2024-11-06 11:15:11.983902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.676 [2024-11-06 11:15:11.983917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.676 [2024-11-06 11:15:11.996938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.676 [2024-11-06 11:15:11.996953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.676 [2024-11-06 11:15:12.011629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.676 [2024-11-06 11:15:12.011644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.676 [2024-11-06 11:15:12.024879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.676 [2024-11-06 11:15:12.024893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.676 [2024-11-06 11:15:12.039731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.676 [2024-11-06 11:15:12.039752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.676 [2024-11-06 11:15:12.052636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.676 [2024-11-06 11:15:12.052651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.676 [2024-11-06 11:15:12.065556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.676 [2024-11-06 11:15:12.065571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.676 [2024-11-06 11:15:12.079316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.676 
[2024-11-06 11:15:12.079331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.676 [2024-11-06 11:15:12.092202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.676 [2024-11-06 11:15:12.092217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.104809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.104824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.117754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.117768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.131664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.131679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.144859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.144873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.159902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.159917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.172818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.172832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.185449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.185464] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.200371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.200386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.213433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.213447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.227881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.227896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.240814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.240829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.253755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.253770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.267603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.267618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.280719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.280734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.293347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.293362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:20.936 [2024-11-06 11:15:12.308385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.308400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.321207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.321221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.335323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.335338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.936 [2024-11-06 11:15:12.348444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:20.936 [2024-11-06 11:15:12.348458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 [2024-11-06 11:15:12.361523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.361538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 19138.50 IOPS, 149.52 MiB/s [2024-11-06T10:15:12.620Z] [2024-11-06 11:15:12.376145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.376159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 [2024-11-06 11:15:12.389404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.389419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 [2024-11-06 11:15:12.403873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.403887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:21.198 [2024-11-06 11:15:12.417112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.417126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 [2024-11-06 11:15:12.432050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.432065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 [2024-11-06 11:15:12.445196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.445211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 [2024-11-06 11:15:12.459645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.459660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 [2024-11-06 11:15:12.472690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.472704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 [2024-11-06 11:15:12.485501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.485515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 [2024-11-06 11:15:12.499706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.499721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 [2024-11-06 11:15:12.512898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.512912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 [2024-11-06 11:15:12.527407] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.527422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 [2024-11-06 11:15:12.540328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.540342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 [2024-11-06 11:15:12.553278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.553293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 [2024-11-06 11:15:12.567797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.567812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 [2024-11-06 11:15:12.581032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.581051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 [2024-11-06 11:15:12.595923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.595938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.198 [2024-11-06 11:15:12.608875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.198 [2024-11-06 11:15:12.608889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.459 [2024-11-06 11:15:12.624206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.459 [2024-11-06 11:15:12.624222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.459 [2024-11-06 11:15:12.637201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:21.459 [2024-11-06 11:15:12.637215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.459 [2024-11-06 11:15:12.651573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.459 [2024-11-06 11:15:12.651588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.459 [2024-11-06 11:15:12.664654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.459 [2024-11-06 11:15:12.664669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.460 [2024-11-06 11:15:12.677846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.460 [2024-11-06 11:15:12.677861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.460 [2024-11-06 11:15:12.691831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.460 [2024-11-06 11:15:12.691846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.460 [2024-11-06 11:15:12.704780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.460 [2024-11-06 11:15:12.704796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.460 [2024-11-06 11:15:12.717826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.460 [2024-11-06 11:15:12.717841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.460 [2024-11-06 11:15:12.731832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.460 [2024-11-06 11:15:12.731846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.460 [2024-11-06 11:15:12.744888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.460 
[2024-11-06 11:15:12.744902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.460 [2024-11-06 11:15:12.759705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.460 [2024-11-06 11:15:12.759720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.460 [2024-11-06 11:15:12.772718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.460 [2024-11-06 11:15:12.772733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.460 [2024-11-06 11:15:12.785471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.460 [2024-11-06 11:15:12.785486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.460 [2024-11-06 11:15:12.799902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.460 [2024-11-06 11:15:12.799918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.460 [2024-11-06 11:15:12.812725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.460 [2024-11-06 11:15:12.812740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.460 [2024-11-06 11:15:12.825322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.460 [2024-11-06 11:15:12.825336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.460 [2024-11-06 11:15:12.839127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.460 [2024-11-06 11:15:12.839146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.460 [2024-11-06 11:15:12.852232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.460 [2024-11-06 11:15:12.852247] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.460 [2024-11-06 11:15:12.865364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.460 [2024-11-06 11:15:12.865378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:12.880033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:12.880049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:12.893008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:12.893022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:12.907973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:12.907989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:12.921000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:12.921014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:12.935917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:12.935932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:12.949102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:12.949116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:12.963648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:12.963663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:21.721 [2024-11-06 11:15:12.976657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:12.976672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:12.989293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:12.989307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:13.003273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:13.003288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:13.016369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:13.016384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:13.028979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:13.028993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:13.044138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:13.044153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:13.056915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:13.056930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:13.071808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:13.071823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:13.084937] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:13.084951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:13.099553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:13.099575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:13.112769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:13.112784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:13.125862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:13.125876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.721 [2024-11-06 11:15:13.140018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.721 [2024-11-06 11:15:13.140033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 [2024-11-06 11:15:13.152769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.152784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 [2024-11-06 11:15:13.164984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.164998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 [2024-11-06 11:15:13.179885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.179899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 [2024-11-06 11:15:13.193228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.193243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 [2024-11-06 11:15:13.207492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.207507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 [2024-11-06 11:15:13.220171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.220186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 [2024-11-06 11:15:13.233495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.233510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 [2024-11-06 11:15:13.247755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.247769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 [2024-11-06 11:15:13.260887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.260902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 [2024-11-06 11:15:13.276264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.276279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 [2024-11-06 11:15:13.289539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.289554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 [2024-11-06 11:15:13.303925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 
[2024-11-06 11:15:13.303940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 [2024-11-06 11:15:13.316971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.316986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 [2024-11-06 11:15:13.332098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.332113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 [2024-11-06 11:15:13.345112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.345127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 [2024-11-06 11:15:13.359548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.359563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 19129.00 IOPS, 149.45 MiB/s [2024-11-06T10:15:13.405Z] [2024-11-06 11:15:13.368564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.368578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 00:33:21.983 Latency(us) 00:33:21.983 [2024-11-06T10:15:13.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.983 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:33:21.983 Nvme1n1 : 5.01 19134.97 149.49 0.00 0.00 6683.59 2662.40 11960.32 00:33:21.983 [2024-11-06T10:15:13.405Z] =================================================================================================================== 00:33:21.983 [2024-11-06T10:15:13.405Z] Total : 19134.97 149.49 0.00 0.00 6683.59 2662.40 11960.32 00:33:21.983 [2024-11-06 11:15:13.380560] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.380574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.983 [2024-11-06 11:15:13.392564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.983 [2024-11-06 11:15:13.392577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.243 [2024-11-06 11:15:13.404563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.243 [2024-11-06 11:15:13.404578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.243 [2024-11-06 11:15:13.416559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.243 [2024-11-06 11:15:13.416569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.243 [2024-11-06 11:15:13.428559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.243 [2024-11-06 11:15:13.428569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.243 [2024-11-06 11:15:13.440556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.243 [2024-11-06 11:15:13.440565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.243 [2024-11-06 11:15:13.452556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.243 [2024-11-06 11:15:13.452564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.243 [2024-11-06 11:15:13.464559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.243 [2024-11-06 11:15:13.464569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.243 [2024-11-06 11:15:13.476556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:22.243 [2024-11-06 11:15:13.476563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3513466) - No such process 00:33:22.243 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3513466 00:33:22.243 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:22.243 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.243 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:22.243 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.243 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:22.243 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.243 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:22.243 delay0 00:33:22.243 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.243 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:22.243 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.243 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:22.243 11:15:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.243 11:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:22.243 [2024-11-06 11:15:13.662933] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:30.381 Initializing NVMe Controllers 00:33:30.381 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:30.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:30.381 Initialization complete. Launching workers. 00:33:30.381 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5092 00:33:30.381 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5369, failed to submit 43 00:33:30.381 success 5204, unsuccessful 165, failed 0 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:30.381 11:15:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:30.381 rmmod nvme_tcp 00:33:30.381 rmmod nvme_fabrics 00:33:30.381 rmmod nvme_keyring 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3511146 ']' 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3511146 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3511146 ']' 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3511146 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3511146 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3511146' 00:33:30.381 killing process with pid 3511146 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@971 -- # kill 3511146 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3511146 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:30.381 11:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.768 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:31.768 00:33:31.768 real 0m34.064s 00:33:31.768 user 0m44.125s 00:33:31.768 sys 0m11.833s 00:33:31.768 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:33:31.768 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.768 ************************************ 00:33:31.768 END TEST nvmf_zcopy 00:33:31.768 ************************************ 00:33:31.768 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:31.768 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:31.768 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:31.768 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:31.768 ************************************ 00:33:31.768 START TEST nvmf_nmic 00:33:31.768 ************************************ 00:33:31.768 11:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:31.768 * Looking for test storage... 
00:33:31.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:31.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.768 --rc genhtml_branch_coverage=1 00:33:31.768 --rc genhtml_function_coverage=1 00:33:31.768 --rc genhtml_legend=1 00:33:31.768 --rc geninfo_all_blocks=1 00:33:31.768 --rc geninfo_unexecuted_blocks=1 00:33:31.768 00:33:31.768 ' 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:31.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.768 --rc genhtml_branch_coverage=1 00:33:31.768 --rc genhtml_function_coverage=1 00:33:31.768 --rc genhtml_legend=1 00:33:31.768 --rc geninfo_all_blocks=1 00:33:31.768 --rc geninfo_unexecuted_blocks=1 00:33:31.768 00:33:31.768 ' 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:31.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.768 --rc genhtml_branch_coverage=1 00:33:31.768 --rc genhtml_function_coverage=1 00:33:31.768 --rc genhtml_legend=1 00:33:31.768 --rc geninfo_all_blocks=1 00:33:31.768 --rc geninfo_unexecuted_blocks=1 00:33:31.768 00:33:31.768 ' 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:31.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.768 --rc genhtml_branch_coverage=1 00:33:31.768 --rc genhtml_function_coverage=1 00:33:31.768 --rc genhtml_legend=1 00:33:31.768 --rc geninfo_all_blocks=1 00:33:31.768 --rc geninfo_unexecuted_blocks=1 00:33:31.768 00:33:31.768 ' 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:31.768 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:31.769 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.031 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:32.031 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:32.031 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:32.031 11:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:40.174 11:15:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:40.174 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:40.174 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:40.174 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:40.175 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:40.175 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:40.175 11:15:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:40.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:40.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:33:40.175 00:33:40.175 --- 10.0.0.2 ping statistics --- 00:33:40.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:40.175 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:40.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:40.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:33:40.175 00:33:40.175 --- 10.0.0.1 ping statistics --- 00:33:40.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:40.175 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3520401 
00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3520401 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3520401 ']' 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:40.175 11:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.175 [2024-11-06 11:15:30.565575] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:40.175 [2024-11-06 11:15:30.566754] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:33:40.175 [2024-11-06 11:15:30.566807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:40.175 [2024-11-06 11:15:30.650454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:40.175 [2024-11-06 11:15:30.694611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:40.175 [2024-11-06 11:15:30.694649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:40.175 [2024-11-06 11:15:30.694657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:40.175 [2024-11-06 11:15:30.694664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:40.175 [2024-11-06 11:15:30.694670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:40.175 [2024-11-06 11:15:30.696425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.175 [2024-11-06 11:15:30.696715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.175 [2024-11-06 11:15:30.696547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:40.175 [2024-11-06 11:15:30.696715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:40.175 [2024-11-06 11:15:30.753148] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:40.175 [2024-11-06 11:15:30.753470] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:40.175 [2024-11-06 11:15:30.754558] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:33:40.175 [2024-11-06 11:15:30.754618] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:40.175 [2024-11-06 11:15:30.754811] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:40.175 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:40.175 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:33:40.175 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:40.175 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:40.175 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.175 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:40.175 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:40.175 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.175 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.176 [2024-11-06 11:15:31.421495] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.176 Malloc0 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.176 [2024-11-06 11:15:31.497408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:40.176 11:15:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:40.176 test case1: single bdev can't be used in multiple subsystems 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.176 [2024-11-06 11:15:31.533158] 
bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:40.176 [2024-11-06 11:15:31.533177] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:40.176 [2024-11-06 11:15:31.533185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.176 request: 00:33:40.176 { 00:33:40.176 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:40.176 "namespace": { 00:33:40.176 "bdev_name": "Malloc0", 00:33:40.176 "no_auto_visible": false 00:33:40.176 }, 00:33:40.176 "method": "nvmf_subsystem_add_ns", 00:33:40.176 "req_id": 1 00:33:40.176 } 00:33:40.176 Got JSON-RPC error response 00:33:40.176 response: 00:33:40.176 { 00:33:40.176 "code": -32602, 00:33:40.176 "message": "Invalid parameters" 00:33:40.176 } 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:40.176 Adding namespace failed - expected result. 
00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:40.176 test case2: host connect to nvmf target in multiple paths 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.176 [2024-11-06 11:15:31.545266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.176 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:40.746 11:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:41.316 11:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:41.316 11:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:33:41.316 11:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:33:41.316 11:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:33:41.316 11:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:33:43.230 11:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:33:43.230 11:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:33:43.230 11:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:33:43.230 11:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:33:43.230 11:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:33:43.230 11:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:33:43.230 11:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:43.230 [global] 00:33:43.230 thread=1 00:33:43.230 invalidate=1 00:33:43.230 rw=write 00:33:43.230 time_based=1 00:33:43.230 runtime=1 00:33:43.230 ioengine=libaio 00:33:43.230 direct=1 00:33:43.230 bs=4096 00:33:43.230 iodepth=1 00:33:43.230 norandommap=0 00:33:43.230 numjobs=1 00:33:43.230 00:33:43.230 verify_dump=1 00:33:43.230 verify_backlog=512 00:33:43.230 verify_state_save=0 00:33:43.230 do_verify=1 00:33:43.230 verify=crc32c-intel 00:33:43.230 [job0] 00:33:43.230 filename=/dev/nvme0n1 00:33:43.230 Could not set queue depth (nvme0n1) 00:33:43.491 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:43.491 fio-3.35 00:33:43.491 Starting 1 thread 00:33:44.880 00:33:44.880 job0: (groupid=0, jobs=1): err= 0: pid=3521521: Wed Nov 6 
11:15:35 2024 00:33:44.880 read: IOPS=16, BW=66.7KiB/s (68.3kB/s)(68.0KiB/1019msec) 00:33:44.880 slat (nsec): min=10247, max=29798, avg=27090.82, stdev=4378.99 00:33:44.880 clat (usec): min=1024, max=42059, avg=39420.79, stdev=9900.06 00:33:44.880 lat (usec): min=1034, max=42087, avg=39447.88, stdev=9904.39 00:33:44.880 clat percentiles (usec): 00:33:44.880 | 1.00th=[ 1029], 5.00th=[ 1029], 10.00th=[41157], 20.00th=[41681], 00:33:44.880 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:33:44.880 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:44.880 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:44.880 | 99.99th=[42206] 00:33:44.880 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:33:44.880 slat (usec): min=9, max=28682, avg=87.41, stdev=1266.27 00:33:44.880 clat (usec): min=244, max=842, avg=583.53, stdev=99.76 00:33:44.880 lat (usec): min=255, max=29312, avg=670.94, stdev=1272.58 00:33:44.880 clat percentiles (usec): 00:33:44.880 | 1.00th=[ 330], 5.00th=[ 396], 10.00th=[ 449], 20.00th=[ 498], 00:33:44.880 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 586], 60.00th=[ 603], 00:33:44.880 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 734], 00:33:44.880 | 99.00th=[ 758], 99.50th=[ 783], 99.90th=[ 840], 99.95th=[ 840], 00:33:44.880 | 99.99th=[ 840] 00:33:44.880 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:33:44.880 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:44.880 lat (usec) : 250=0.19%, 500=19.66%, 750=75.24%, 1000=1.70% 00:33:44.880 lat (msec) : 2=0.19%, 50=3.02% 00:33:44.880 cpu : usr=0.69%, sys=2.26%, ctx=533, majf=0, minf=1 00:33:44.880 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:44.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:33:44.880 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.880 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:44.880 00:33:44.880 Run status group 0 (all jobs): 00:33:44.880 READ: bw=66.7KiB/s (68.3kB/s), 66.7KiB/s-66.7KiB/s (68.3kB/s-68.3kB/s), io=68.0KiB (69.6kB), run=1019-1019msec 00:33:44.880 WRITE: bw=2010KiB/s (2058kB/s), 2010KiB/s-2010KiB/s (2058kB/s-2058kB/s), io=2048KiB (2097kB), run=1019-1019msec 00:33:44.880 00:33:44.880 Disk stats (read/write): 00:33:44.880 nvme0n1: ios=40/512, merge=0/0, ticks=1537/235, in_queue=1772, util=98.70% 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:44.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 
00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:44.880 rmmod nvme_tcp 00:33:44.880 rmmod nvme_fabrics 00:33:44.880 rmmod nvme_keyring 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3520401 ']' 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3520401 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3520401 ']' 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3520401 00:33:44.880 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 3520401 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3520401' 00:33:45.142 killing process with pid 3520401 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3520401 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3520401 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.142 11:15:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.142 11:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:47.693 00:33:47.693 real 0m15.627s 00:33:47.693 user 0m35.694s 00:33:47.693 sys 0m7.368s 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:47.693 ************************************ 00:33:47.693 END TEST nvmf_nmic 00:33:47.693 ************************************ 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:47.693 ************************************ 00:33:47.693 START TEST nvmf_fio_target 00:33:47.693 ************************************ 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:47.693 * Looking for test storage... 
00:33:47.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:47.693 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:47.693 
11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:47.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.694 --rc genhtml_branch_coverage=1 00:33:47.694 --rc genhtml_function_coverage=1 00:33:47.694 --rc genhtml_legend=1 00:33:47.694 --rc geninfo_all_blocks=1 00:33:47.694 --rc geninfo_unexecuted_blocks=1 00:33:47.694 00:33:47.694 ' 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:47.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.694 --rc genhtml_branch_coverage=1 00:33:47.694 --rc genhtml_function_coverage=1 00:33:47.694 --rc genhtml_legend=1 00:33:47.694 --rc geninfo_all_blocks=1 00:33:47.694 --rc geninfo_unexecuted_blocks=1 00:33:47.694 00:33:47.694 ' 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:47.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.694 --rc genhtml_branch_coverage=1 00:33:47.694 --rc genhtml_function_coverage=1 00:33:47.694 --rc genhtml_legend=1 00:33:47.694 --rc geninfo_all_blocks=1 00:33:47.694 --rc geninfo_unexecuted_blocks=1 00:33:47.694 00:33:47.694 ' 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:47.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.694 --rc genhtml_branch_coverage=1 00:33:47.694 --rc genhtml_function_coverage=1 00:33:47.694 --rc genhtml_legend=1 00:33:47.694 --rc geninfo_all_blocks=1 
00:33:47.694 --rc geninfo_unexecuted_blocks=1 00:33:47.694 00:33:47.694 ' 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:47.694 
11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.694 11:15:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:47.694 
11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:47.694 11:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:47.694 11:15:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:55.839 11:15:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:55.839 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:55.839 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:55.839 
11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:55.839 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:55.839 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:55.839 11:15:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:55.839 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:55.840 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:55.840 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:55.840 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:55.840 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:55.840 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:55.840 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:55.840 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:55.840 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:55.840 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:55.840 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:33:55.840 11:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:55.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:55.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.718 ms 00:33:55.840 00:33:55.840 --- 10.0.0.2 ping statistics --- 00:33:55.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.840 rtt min/avg/max/mdev = 0.718/0.718/0.718/0.000 ms 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:55.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:55.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:33:55.840 00:33:55.840 --- 10.0.0.1 ping statistics --- 00:33:55.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.840 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.840 11:15:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3525926 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3525926 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3525926 ']' 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:55.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:55.840 11:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.840 [2024-11-06 11:15:46.256052] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:55.840 [2024-11-06 11:15:46.257227] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
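Here `nvmfappstart` launches `nvmf_tgt` inside the namespace with the flags visible in the log (`-i 0 -e 0xFFFF --interrupt-mode -m 0xF`) and `waitforlisten` polls for the RPC socket at `/var/tmp/spdk.sock` before continuing. A minimal sketch of that pattern; `SPDK_BIN` is an assumed placeholder for the built binary path, and start_tgt echoes rather than executing:

```shell
#!/usr/bin/env bash
# Sketch of nvmfappstart/waitforlisten as driven in the log. The netns name,
# flags, and RPC socket path come from the log; SPDK_BIN is an assumption.
set -euo pipefail

SPDK_BIN=${SPDK_BIN:-./build/bin/nvmf_tgt}   # assumed build location
RPC_SOCK=/var/tmp/spdk.sock
NS=cvl_0_0_ns_spdk

start_tgt() {
    # -e 0xFFFF enables all tracepoint groups, -m 0xF uses 4 cores,
    # --interrupt-mode is the mode this particular test run exercises
    echo "+ ip netns exec $NS $SPDK_BIN -i 0 -e 0xFFFF --interrupt-mode -m 0xF"
}

waitforlisten() {                            # poll for the RPC UNIX socket
    local retries=${1:-100}
    for ((i = 0; i < retries; i++)); do
        [ -S "$RPC_SOCK" ] && return 0
        sleep 0.1
    done
    return 1
}

start_tgt
```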
00:33:55.840 [2024-11-06 11:15:46.257280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:55.840 [2024-11-06 11:15:46.339807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:55.840 [2024-11-06 11:15:46.381578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:55.840 [2024-11-06 11:15:46.381613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:55.840 [2024-11-06 11:15:46.381621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:55.840 [2024-11-06 11:15:46.381628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:55.840 [2024-11-06 11:15:46.381634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:55.840 [2024-11-06 11:15:46.383190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:55.840 [2024-11-06 11:15:46.383306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:55.840 [2024-11-06 11:15:46.383462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.840 [2024-11-06 11:15:46.383462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:55.840 [2024-11-06 11:15:46.440093] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:55.840 [2024-11-06 11:15:46.440100] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:55.840 [2024-11-06 11:15:46.441159] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:33:55.840 [2024-11-06 11:15:46.441966] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:55.840 [2024-11-06 11:15:46.442039] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:55.840 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:55.840 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:33:55.840 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:55.840 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:55.840 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.840 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:55.840 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:56.100 [2024-11-06 11:15:47.259990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:56.100 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:56.100 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:56.100 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:33:56.360 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:56.360 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:56.619 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:56.619 11:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:56.619 11:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:56.619 11:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:56.879 11:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:57.139 11:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:57.139 11:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:57.139 11:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:57.139 11:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:57.400 11:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
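The `target/fio.sh` steps above provision the block devices over RPC: seven 64 MiB malloc bdevs with 512-byte blocks, two of which are combined into a `raid0` array and three into a `concat0` array. A dry-run sketch of that RPC sequence, with `rpc()` echoing in place of the real `scripts/rpc.py`:

```shell
#!/usr/bin/env bash
# Sketch of the bdev provisioning the log drives over rpc.py. Sizes, names,
# and raid parameters are taken from the log records above.
set -euo pipefail

rpc() { echo "+ rpc.py $*"; }     # swap in scripts/rpc.py against a live target

rpc bdev_malloc_create 64 512     # -> Malloc0 (plain namespace)
rpc bdev_malloc_create 64 512     # -> Malloc1 (plain namespace)

rpc bdev_malloc_create 64 512     # -> Malloc2
rpc bdev_malloc_create 64 512     # -> Malloc3
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc2 Malloc3"

rpc bdev_malloc_create 64 512     # -> Malloc4
rpc bdev_malloc_create 64 512     # -> Malloc5
rpc bdev_malloc_create 64 512     # -> Malloc6
rpc bdev_raid_create -n concat0 -r concat -z 64 -b "Malloc4 Malloc5 Malloc6"
```

The subsystem creation that follows in the log then attaches `Malloc0`, `Malloc1`, `raid0`, and `concat0` as the four namespaces of `nqn.2016-06.io.spdk:cnode1`.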
00:33:57.400 11:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:57.661 11:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:57.661 11:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:57.661 11:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:57.922 11:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:57.922 11:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:58.182 11:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:58.182 [2024-11-06 11:15:49.524073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.182 11:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:58.471 11:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:58.775 11:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:59.060 11:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:59.060 11:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:33:59.060 11:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:33:59.060 11:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:33:59.060 11:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:33:59.060 11:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:34:00.974 11:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:00.974 11:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:34:00.974 11:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:00.974 11:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:34:00.974 11:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:00.974 11:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1210 -- # return 0 00:34:00.974 11:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:01.235 [global] 00:34:01.235 thread=1 00:34:01.235 invalidate=1 00:34:01.235 rw=write 00:34:01.235 time_based=1 00:34:01.235 runtime=1 00:34:01.235 ioengine=libaio 00:34:01.235 direct=1 00:34:01.235 bs=4096 00:34:01.235 iodepth=1 00:34:01.235 norandommap=0 00:34:01.235 numjobs=1 00:34:01.235 00:34:01.235 verify_dump=1 00:34:01.235 verify_backlog=512 00:34:01.235 verify_state_save=0 00:34:01.235 do_verify=1 00:34:01.235 verify=crc32c-intel 00:34:01.235 [job0] 00:34:01.235 filename=/dev/nvme0n1 00:34:01.235 [job1] 00:34:01.235 filename=/dev/nvme0n2 00:34:01.235 [job2] 00:34:01.235 filename=/dev/nvme0n3 00:34:01.235 [job3] 00:34:01.235 filename=/dev/nvme0n4 00:34:01.235 Could not set queue depth (nvme0n1) 00:34:01.235 Could not set queue depth (nvme0n2) 00:34:01.235 Could not set queue depth (nvme0n3) 00:34:01.235 Could not set queue depth (nvme0n4) 00:34:01.496 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:01.497 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:01.497 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:01.497 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:01.497 fio-3.35 00:34:01.497 Starting 4 threads 00:34:02.885 00:34:02.885 job0: (groupid=0, jobs=1): err= 0: pid=3527370: Wed Nov 6 11:15:54 2024 00:34:02.885 read: IOPS=16, BW=67.8KiB/s (69.4kB/s)(68.0KiB/1003msec) 00:34:02.885 slat (nsec): min=26853, max=27714, avg=27175.88, stdev=222.24 00:34:02.885 clat (usec): min=983, max=42075, avg=39344.60, stdev=9891.54 00:34:02.885 lat (usec): min=1011, 
max=42102, avg=39371.78, stdev=9891.40 00:34:02.885 clat percentiles (usec): 00:34:02.885 | 1.00th=[ 988], 5.00th=[ 988], 10.00th=[41157], 20.00th=[41157], 00:34:02.885 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:34:02.885 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:02.885 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:02.885 | 99.99th=[42206] 00:34:02.885 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:34:02.885 slat (nsec): min=9230, max=66135, avg=31440.78, stdev=9890.03 00:34:02.885 clat (usec): min=258, max=1263, avg=612.13, stdev=130.59 00:34:02.885 lat (usec): min=269, max=1298, avg=643.57, stdev=134.43 00:34:02.885 clat percentiles (usec): 00:34:02.885 | 1.00th=[ 285], 5.00th=[ 379], 10.00th=[ 449], 20.00th=[ 506], 00:34:02.885 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:34:02.885 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 816], 00:34:02.885 | 99.00th=[ 898], 99.50th=[ 938], 99.90th=[ 1270], 99.95th=[ 1270], 00:34:02.885 | 99.99th=[ 1270] 00:34:02.885 bw ( KiB/s): min= 4096, max= 4096, per=45.31%, avg=4096.00, stdev= 0.00, samples=1 00:34:02.885 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:02.885 lat (usec) : 500=18.34%, 750=65.97%, 1000=12.48% 00:34:02.885 lat (msec) : 2=0.19%, 50=3.02% 00:34:02.885 cpu : usr=1.00%, sys=2.10%, ctx=532, majf=0, minf=1 00:34:02.885 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.885 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.885 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:02.885 job1: (groupid=0, jobs=1): err= 0: pid=3527374: Wed Nov 6 11:15:54 2024 00:34:02.885 read: IOPS=18, BW=75.1KiB/s 
(76.9kB/s)(76.0KiB/1012msec) 00:34:02.885 slat (nsec): min=7798, max=26900, avg=24422.74, stdev=5575.35 00:34:02.885 clat (usec): min=1071, max=42231, avg=39738.12, stdev=9367.18 00:34:02.885 lat (usec): min=1080, max=42257, avg=39762.54, stdev=9370.82 00:34:02.885 clat percentiles (usec): 00:34:02.885 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[41157], 20.00th=[41681], 00:34:02.885 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:02.885 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:02.885 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:02.885 | 99.99th=[42206] 00:34:02.885 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:34:02.885 slat (nsec): min=9011, max=55824, avg=22830.93, stdev=12667.91 00:34:02.885 clat (usec): min=125, max=1003, avg=471.58, stdev=166.78 00:34:02.885 lat (usec): min=136, max=1036, avg=494.41, stdev=174.37 00:34:02.885 clat percentiles (usec): 00:34:02.885 | 1.00th=[ 169], 5.00th=[ 245], 10.00th=[ 285], 20.00th=[ 314], 00:34:02.885 | 30.00th=[ 351], 40.00th=[ 400], 50.00th=[ 445], 60.00th=[ 519], 00:34:02.885 | 70.00th=[ 570], 80.00th=[ 619], 90.00th=[ 693], 95.00th=[ 750], 00:34:02.885 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 1004], 99.95th=[ 1004], 00:34:02.885 | 99.99th=[ 1004] 00:34:02.885 bw ( KiB/s): min= 4096, max= 4096, per=45.31%, avg=4096.00, stdev= 0.00, samples=1 00:34:02.885 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:02.885 lat (usec) : 250=5.08%, 500=48.96%, 750=37.66%, 1000=4.52% 00:34:02.885 lat (msec) : 2=0.38%, 50=3.39% 00:34:02.885 cpu : usr=0.69%, sys=1.19%, ctx=533, majf=0, minf=2 00:34:02.885 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.885 issued rwts: total=19,512,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:34:02.885 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:02.885 job2: (groupid=0, jobs=1): err= 0: pid=3527393: Wed Nov 6 11:15:54 2024 00:34:02.885 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:02.885 slat (nsec): min=8306, max=62883, avg=29029.98, stdev=3784.92 00:34:02.885 clat (usec): min=641, max=1311, avg=1025.53, stdev=98.20 00:34:02.885 lat (usec): min=669, max=1339, avg=1054.56, stdev=97.97 00:34:02.885 clat percentiles (usec): 00:34:02.885 | 1.00th=[ 758], 5.00th=[ 857], 10.00th=[ 914], 20.00th=[ 955], 00:34:02.885 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1029], 60.00th=[ 1057], 00:34:02.885 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:34:02.885 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1319], 99.95th=[ 1319], 00:34:02.885 | 99.99th=[ 1319] 00:34:02.885 write: IOPS=630, BW=2521KiB/s (2582kB/s)(2524KiB/1001msec); 0 zone resets 00:34:02.885 slat (usec): min=9, max=30018, avg=83.08, stdev=1194.37 00:34:02.885 clat (usec): min=251, max=985, avg=631.02, stdev=133.90 00:34:02.885 lat (usec): min=265, max=30808, avg=714.10, stdev=1208.72 00:34:02.885 clat percentiles (usec): 00:34:02.885 | 1.00th=[ 314], 5.00th=[ 388], 10.00th=[ 457], 20.00th=[ 515], 00:34:02.885 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 644], 60.00th=[ 668], 00:34:02.885 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 799], 95.00th=[ 840], 00:34:02.885 | 99.00th=[ 938], 99.50th=[ 947], 99.90th=[ 988], 99.95th=[ 988], 00:34:02.885 | 99.99th=[ 988] 00:34:02.885 bw ( KiB/s): min= 4096, max= 4096, per=45.31%, avg=4096.00, stdev= 0.00, samples=1 00:34:02.885 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:02.885 lat (usec) : 500=9.80%, 750=35.78%, 1000=28.08% 00:34:02.885 lat (msec) : 2=26.33% 00:34:02.885 cpu : usr=2.10%, sys=5.10%, ctx=1147, majf=0, minf=1 00:34:02.885 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.885 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.885 issued rwts: total=512,631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.885 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:02.885 job3: (groupid=0, jobs=1): err= 0: pid=3527399: Wed Nov 6 11:15:54 2024 00:34:02.885 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:02.885 slat (nsec): min=8260, max=63950, avg=28765.03, stdev=3392.64 00:34:02.885 clat (usec): min=726, max=1373, avg=1109.69, stdev=99.96 00:34:02.885 lat (usec): min=755, max=1401, avg=1138.46, stdev=100.21 00:34:02.885 clat percentiles (usec): 00:34:02.885 | 1.00th=[ 799], 5.00th=[ 914], 10.00th=[ 963], 20.00th=[ 1045], 00:34:02.885 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:34:02.885 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1237], 00:34:02.885 | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[ 1369], 99.95th=[ 1369], 00:34:02.885 | 99.99th=[ 1369] 00:34:02.885 write: IOPS=631, BW=2525KiB/s (2586kB/s)(2528KiB/1001msec); 0 zone resets 00:34:02.885 slat (nsec): min=9748, max=60577, avg=31409.02, stdev=10362.73 00:34:02.885 clat (usec): min=131, max=1108, avg=613.72, stdev=148.03 00:34:02.885 lat (usec): min=143, max=1143, avg=645.13, stdev=151.36 00:34:02.885 clat percentiles (usec): 00:34:02.885 | 1.00th=[ 293], 5.00th=[ 363], 10.00th=[ 433], 20.00th=[ 494], 00:34:02.886 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 652], 00:34:02.886 | 70.00th=[ 685], 80.00th=[ 734], 90.00th=[ 807], 95.00th=[ 873], 00:34:02.886 | 99.00th=[ 963], 99.50th=[ 1004], 99.90th=[ 1106], 99.95th=[ 1106], 00:34:02.886 | 99.99th=[ 1106] 00:34:02.886 bw ( KiB/s): min= 4096, max= 4096, per=45.31%, avg=4096.00, stdev= 0.00, samples=1 00:34:02.886 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:02.886 lat (usec) : 250=0.17%, 500=11.54%, 750=35.05%, 1000=14.95% 
00:34:02.886 lat (msec) : 2=38.29% 00:34:02.886 cpu : usr=2.20%, sys=4.20%, ctx=1146, majf=0, minf=1 00:34:02.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.886 issued rwts: total=512,632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:02.886 00:34:02.886 Run status group 0 (all jobs): 00:34:02.886 READ: bw=4190KiB/s (4290kB/s), 67.8KiB/s-2046KiB/s (69.4kB/s-2095kB/s), io=4240KiB (4342kB), run=1001-1012msec 00:34:02.886 WRITE: bw=9040KiB/s (9256kB/s), 2024KiB/s-2525KiB/s (2072kB/s-2586kB/s), io=9148KiB (9368kB), run=1001-1012msec 00:34:02.886 00:34:02.886 Disk stats (read/write): 00:34:02.886 nvme0n1: ios=68/512, merge=0/0, ticks=758/245, in_queue=1003, util=86.57% 00:34:02.886 nvme0n2: ios=58/512, merge=0/0, ticks=880/224, in_queue=1104, util=89.89% 00:34:02.886 nvme0n3: ios=496/512, merge=0/0, ticks=799/264, in_queue=1063, util=94.18% 00:34:02.886 nvme0n4: ios=492/512, merge=0/0, ticks=733/298, in_queue=1031, util=94.11% 00:34:02.886 11:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:02.886 [global] 00:34:02.886 thread=1 00:34:02.886 invalidate=1 00:34:02.886 rw=randwrite 00:34:02.886 time_based=1 00:34:02.886 runtime=1 00:34:02.886 ioengine=libaio 00:34:02.886 direct=1 00:34:02.886 bs=4096 00:34:02.886 iodepth=1 00:34:02.886 norandommap=0 00:34:02.886 numjobs=1 00:34:02.886 00:34:02.886 verify_dump=1 00:34:02.886 verify_backlog=512 00:34:02.886 verify_state_save=0 00:34:02.886 do_verify=1 00:34:02.886 verify=crc32c-intel 00:34:02.886 [job0] 00:34:02.886 filename=/dev/nvme0n1 00:34:02.886 [job1] 00:34:02.886 filename=/dev/nvme0n2 
00:34:02.886 [job2] 00:34:02.886 filename=/dev/nvme0n3 00:34:02.886 [job3] 00:34:02.886 filename=/dev/nvme0n4 00:34:02.886 Could not set queue depth (nvme0n1) 00:34:02.886 Could not set queue depth (nvme0n2) 00:34:02.886 Could not set queue depth (nvme0n3) 00:34:02.886 Could not set queue depth (nvme0n4) 00:34:03.148 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:03.148 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:03.148 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:03.148 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:03.148 fio-3.35 00:34:03.148 Starting 4 threads 00:34:04.535 00:34:04.535 job0: (groupid=0, jobs=1): err= 0: pid=3527829: Wed Nov 6 11:15:55 2024 00:34:04.535 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:04.535 slat (nsec): min=7053, max=61148, avg=27368.51, stdev=4228.50 00:34:04.535 clat (usec): min=650, max=1315, avg=1027.04, stdev=79.52 00:34:04.535 lat (usec): min=677, max=1342, avg=1054.41, stdev=79.42 00:34:04.535 clat percentiles (usec): 00:34:04.535 | 1.00th=[ 791], 5.00th=[ 889], 10.00th=[ 930], 20.00th=[ 971], 00:34:04.535 | 30.00th=[ 996], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1045], 00:34:04.535 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:34:04.535 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1319], 99.95th=[ 1319], 00:34:04.535 | 99.99th=[ 1319] 00:34:04.535 write: IOPS=684, BW=2737KiB/s (2803kB/s)(2740KiB/1001msec); 0 zone resets 00:34:04.535 slat (nsec): min=8870, max=66421, avg=30485.92, stdev=8502.27 00:34:04.535 clat (usec): min=276, max=974, avg=627.19, stdev=124.79 00:34:04.535 lat (usec): min=286, max=1007, avg=657.68, stdev=127.41 00:34:04.535 clat percentiles (usec): 00:34:04.535 | 1.00th=[ 338], 5.00th=[ 
420], 10.00th=[ 469], 20.00th=[ 515], 00:34:04.535 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 668], 00:34:04.535 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 824], 00:34:04.535 | 99.00th=[ 898], 99.50th=[ 947], 99.90th=[ 971], 99.95th=[ 971], 00:34:04.535 | 99.99th=[ 971] 00:34:04.535 bw ( KiB/s): min= 4096, max= 4096, per=41.24%, avg=4096.00, stdev= 0.00, samples=1 00:34:04.535 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:04.535 lat (usec) : 500=9.19%, 750=38.51%, 1000=23.39% 00:34:04.535 lat (msec) : 2=28.91% 00:34:04.535 cpu : usr=3.00%, sys=4.30%, ctx=1198, majf=0, minf=1 00:34:04.535 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.535 issued rwts: total=512,685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.535 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:04.535 job1: (groupid=0, jobs=1): err= 0: pid=3527840: Wed Nov 6 11:15:55 2024 00:34:04.535 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:04.535 slat (nsec): min=24136, max=56588, avg=25274.00, stdev=2977.62 00:34:04.535 clat (usec): min=660, max=1250, avg=1026.86, stdev=91.83 00:34:04.535 lat (usec): min=696, max=1275, avg=1052.14, stdev=91.75 00:34:04.535 clat percentiles (usec): 00:34:04.535 | 1.00th=[ 783], 5.00th=[ 832], 10.00th=[ 906], 20.00th=[ 963], 00:34:04.535 | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1037], 60.00th=[ 1057], 00:34:04.535 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:34:04.535 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1254], 99.95th=[ 1254], 00:34:04.535 | 99.99th=[ 1254] 00:34:04.535 write: IOPS=707, BW=2829KiB/s (2897kB/s)(2832KiB/1001msec); 0 zone resets 00:34:04.535 slat (nsec): min=9100, max=65819, avg=26502.88, stdev=9570.18 00:34:04.535 clat (usec): 
min=283, max=1623, avg=612.27, stdev=135.09 00:34:04.535 lat (usec): min=292, max=1666, avg=638.77, stdev=138.01 00:34:04.535 clat percentiles (usec): 00:34:04.535 | 1.00th=[ 351], 5.00th=[ 396], 10.00th=[ 445], 20.00th=[ 494], 00:34:04.535 | 30.00th=[ 537], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:34:04.535 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 816], 00:34:04.535 | 99.00th=[ 906], 99.50th=[ 955], 99.90th=[ 1631], 99.95th=[ 1631], 00:34:04.535 | 99.99th=[ 1631] 00:34:04.535 bw ( KiB/s): min= 4096, max= 4096, per=41.24%, avg=4096.00, stdev= 0.00, samples=1 00:34:04.535 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:04.535 lat (usec) : 500=12.87%, 750=37.87%, 1000=19.26% 00:34:04.535 lat (msec) : 2=30.00% 00:34:04.535 cpu : usr=1.50%, sys=3.50%, ctx=1220, majf=0, minf=1 00:34:04.535 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.535 issued rwts: total=512,708,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.535 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:04.535 job2: (groupid=0, jobs=1): err= 0: pid=3527857: Wed Nov 6 11:15:55 2024 00:34:04.535 read: IOPS=15, BW=63.0KiB/s (64.5kB/s)(64.0KiB/1016msec) 00:34:04.535 slat (nsec): min=27286, max=28644, avg=27972.19, stdev=375.61 00:34:04.535 clat (usec): min=40980, max=42127, avg=41629.69, stdev=389.89 00:34:04.535 lat (usec): min=41008, max=42155, avg=41657.66, stdev=390.01 00:34:04.535 clat percentiles (usec): 00:34:04.535 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:04.535 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:34:04.535 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:04.535 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:04.535 
| 99.99th=[42206] 00:34:04.535 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:34:04.535 slat (nsec): min=9330, max=54716, avg=33026.70, stdev=8274.09 00:34:04.535 clat (usec): min=217, max=1087, avg=639.51, stdev=136.99 00:34:04.535 lat (usec): min=253, max=1122, avg=672.54, stdev=139.45 00:34:04.535 clat percentiles (usec): 00:34:04.535 | 1.00th=[ 314], 5.00th=[ 404], 10.00th=[ 474], 20.00th=[ 519], 00:34:04.535 | 30.00th=[ 570], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 676], 00:34:04.535 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 816], 95.00th=[ 857], 00:34:04.535 | 99.00th=[ 955], 99.50th=[ 963], 99.90th=[ 1090], 99.95th=[ 1090], 00:34:04.535 | 99.99th=[ 1090] 00:34:04.535 bw ( KiB/s): min= 4096, max= 4096, per=41.24%, avg=4096.00, stdev= 0.00, samples=1 00:34:04.535 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:04.535 lat (usec) : 250=0.19%, 500=14.20%, 750=63.45%, 1000=18.75% 00:34:04.535 lat (msec) : 2=0.38%, 50=3.03% 00:34:04.535 cpu : usr=1.18%, sys=2.07%, ctx=530, majf=0, minf=1 00:34:04.535 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.535 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.535 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:04.535 job3: (groupid=0, jobs=1): err= 0: pid=3527863: Wed Nov 6 11:15:55 2024 00:34:04.535 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:04.535 slat (nsec): min=27312, max=62964, avg=28700.93, stdev=3238.38 00:34:04.535 clat (usec): min=905, max=1392, avg=1117.58, stdev=83.29 00:34:04.535 lat (usec): min=933, max=1421, avg=1146.28, stdev=83.15 00:34:04.535 clat percentiles (usec): 00:34:04.535 | 1.00th=[ 922], 5.00th=[ 988], 10.00th=[ 1012], 20.00th=[ 1057], 00:34:04.535 | 30.00th=[ 1074], 40.00th=[ 1090], 
50.00th=[ 1123], 60.00th=[ 1139], 00:34:04.535 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1254], 00:34:04.535 | 99.00th=[ 1319], 99.50th=[ 1352], 99.90th=[ 1401], 99.95th=[ 1401], 00:34:04.535 | 99.99th=[ 1401] 00:34:04.535 write: IOPS=617, BW=2470KiB/s (2529kB/s)(2472KiB/1001msec); 0 zone resets 00:34:04.535 slat (nsec): min=9325, max=54638, avg=31802.10, stdev=9643.52 00:34:04.535 clat (usec): min=167, max=2102, avg=619.30, stdev=180.81 00:34:04.535 lat (usec): min=177, max=2138, avg=651.10, stdev=182.97 00:34:04.535 clat percentiles (usec): 00:34:04.535 | 1.00th=[ 277], 5.00th=[ 367], 10.00th=[ 408], 20.00th=[ 498], 00:34:04.535 | 30.00th=[ 545], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:34:04.535 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 807], 00:34:04.535 | 99.00th=[ 963], 99.50th=[ 1647], 99.90th=[ 2114], 99.95th=[ 2114], 00:34:04.535 | 99.99th=[ 2114] 00:34:04.535 bw ( KiB/s): min= 4096, max= 4096, per=41.24%, avg=4096.00, stdev= 0.00, samples=1 00:34:04.535 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:04.535 lat (usec) : 250=0.35%, 500=11.06%, 750=35.22%, 1000=11.06% 00:34:04.535 lat (msec) : 2=42.12%, 4=0.18% 00:34:04.535 cpu : usr=2.00%, sys=5.00%, ctx=1133, majf=0, minf=1 00:34:04.535 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.535 issued rwts: total=512,618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.535 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:04.535 00:34:04.535 Run status group 0 (all jobs): 00:34:04.535 READ: bw=6110KiB/s (6257kB/s), 63.0KiB/s-2046KiB/s (64.5kB/s-2095kB/s), io=6208KiB (6357kB), run=1001-1016msec 00:34:04.535 WRITE: bw=9933KiB/s (10.2MB/s), 2016KiB/s-2829KiB/s (2064kB/s-2897kB/s), io=9.86MiB (10.3MB), run=1001-1016msec 00:34:04.535 
00:34:04.535 Disk stats (read/write): 00:34:04.535 nvme0n1: ios=512/512, merge=0/0, ticks=497/258, in_queue=755, util=87.98% 00:34:04.535 nvme0n2: ios=524/512, merge=0/0, ticks=567/310, in_queue=877, util=91.03% 00:34:04.535 nvme0n3: ios=43/512, merge=0/0, ticks=1238/255, in_queue=1493, util=96.00% 00:34:04.535 nvme0n4: ios=483/512, merge=0/0, ticks=630/251, in_queue=881, util=99.04% 00:34:04.535 11:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:04.535 [global] 00:34:04.535 thread=1 00:34:04.535 invalidate=1 00:34:04.535 rw=write 00:34:04.535 time_based=1 00:34:04.535 runtime=1 00:34:04.536 ioengine=libaio 00:34:04.536 direct=1 00:34:04.536 bs=4096 00:34:04.536 iodepth=128 00:34:04.536 norandommap=0 00:34:04.536 numjobs=1 00:34:04.536 00:34:04.536 verify_dump=1 00:34:04.536 verify_backlog=512 00:34:04.536 verify_state_save=0 00:34:04.536 do_verify=1 00:34:04.536 verify=crc32c-intel 00:34:04.536 [job0] 00:34:04.536 filename=/dev/nvme0n1 00:34:04.536 [job1] 00:34:04.536 filename=/dev/nvme0n2 00:34:04.536 [job2] 00:34:04.536 filename=/dev/nvme0n3 00:34:04.536 [job3] 00:34:04.536 filename=/dev/nvme0n4 00:34:04.536 Could not set queue depth (nvme0n1) 00:34:04.536 Could not set queue depth (nvme0n2) 00:34:04.536 Could not set queue depth (nvme0n3) 00:34:04.536 Could not set queue depth (nvme0n4) 00:34:04.797 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:04.797 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:04.797 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:04.797 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:04.797 fio-3.35 00:34:04.797 Starting 4 threads 
00:34:06.183 00:34:06.183 job0: (groupid=0, jobs=1): err= 0: pid=3528316: Wed Nov 6 11:15:57 2024 00:34:06.183 read: IOPS=7603, BW=29.7MiB/s (31.1MB/s)(30.0MiB/1010msec) 00:34:06.183 slat (nsec): min=976, max=14866k, avg=71622.77, stdev=587096.33 00:34:06.183 clat (usec): min=2532, max=54751, avg=9403.45, stdev=5786.66 00:34:06.183 lat (usec): min=2556, max=54784, avg=9475.07, stdev=5835.75 00:34:06.183 clat percentiles (usec): 00:34:06.183 | 1.00th=[ 4178], 5.00th=[ 5800], 10.00th=[ 6063], 20.00th=[ 6587], 00:34:06.183 | 30.00th=[ 6915], 40.00th=[ 7439], 50.00th=[ 7832], 60.00th=[ 8291], 00:34:06.183 | 70.00th=[ 9110], 80.00th=[10683], 90.00th=[13566], 95.00th=[17433], 00:34:06.183 | 99.00th=[43779], 99.50th=[48497], 99.90th=[50594], 99.95th=[50594], 00:34:06.183 | 99.99th=[54789] 00:34:06.183 write: IOPS=7645, BW=29.9MiB/s (31.3MB/s)(30.2MiB/1010msec); 0 zone resets 00:34:06.183 slat (nsec): min=1678, max=6928.4k, avg=53178.61, stdev=352764.63 00:34:06.183 clat (usec): min=1163, max=21645, avg=7234.41, stdev=2154.75 00:34:06.183 lat (usec): min=1173, max=21659, avg=7287.59, stdev=2170.38 00:34:06.183 clat percentiles (usec): 00:34:06.183 | 1.00th=[ 2442], 5.00th=[ 3916], 10.00th=[ 4686], 20.00th=[ 5866], 00:34:06.183 | 30.00th=[ 6521], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7767], 00:34:06.183 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8979], 95.00th=[10814], 00:34:06.183 | 99.00th=[14222], 99.50th=[20055], 99.90th=[21627], 99.95th=[21627], 00:34:06.183 | 99.99th=[21627] 00:34:06.183 bw ( KiB/s): min=28672, max=32768, per=29.22%, avg=30720.00, stdev=2896.31, samples=2 00:34:06.183 iops : min= 7168, max= 8192, avg=7680.00, stdev=724.08, samples=2 00:34:06.183 lat (msec) : 2=0.22%, 4=2.73%, 10=82.11%, 20=13.21%, 50=1.50% 00:34:06.183 lat (msec) : 100=0.23% 00:34:06.183 cpu : usr=4.86%, sys=7.63%, ctx=640, majf=0, minf=1 00:34:06.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:06.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:06.183 issued rwts: total=7680,7722,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:06.183 job1: (groupid=0, jobs=1): err= 0: pid=3528330: Wed Nov 6 11:15:57 2024 00:34:06.183 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:34:06.183 slat (nsec): min=953, max=17084k, avg=59172.40, stdev=516588.99 00:34:06.183 clat (usec): min=3309, max=29170, avg=8854.24, stdev=2773.77 00:34:06.183 lat (usec): min=3316, max=31133, avg=8913.41, stdev=2812.72 00:34:06.183 clat percentiles (usec): 00:34:06.183 | 1.00th=[ 5538], 5.00th=[ 6128], 10.00th=[ 6587], 20.00th=[ 7046], 00:34:06.183 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8225], 00:34:06.183 | 70.00th=[ 8586], 80.00th=[10814], 90.00th=[12387], 95.00th=[14091], 00:34:06.183 | 99.00th=[19268], 99.50th=[19268], 99.90th=[22938], 99.95th=[22938], 00:34:06.183 | 99.99th=[29230] 00:34:06.183 write: IOPS=7058, BW=27.6MiB/s (28.9MB/s)(27.7MiB/1004msec); 0 zone resets 00:34:06.183 slat (nsec): min=1608, max=14498k, avg=72920.24, stdev=585604.86 00:34:06.183 clat (usec): min=1277, max=66704, avg=9635.62, stdev=8548.73 00:34:06.183 lat (usec): min=1287, max=66712, avg=9708.54, stdev=8596.20 00:34:06.183 clat percentiles (usec): 00:34:06.183 | 1.00th=[ 3949], 5.00th=[ 4686], 10.00th=[ 5080], 20.00th=[ 6259], 00:34:06.183 | 30.00th=[ 6915], 40.00th=[ 7177], 50.00th=[ 7832], 60.00th=[ 8160], 00:34:06.183 | 70.00th=[ 8717], 80.00th=[10159], 90.00th=[11600], 95.00th=[21103], 00:34:06.183 | 99.00th=[63701], 99.50th=[65799], 99.90th=[66847], 99.95th=[66847], 00:34:06.183 | 99.99th=[66847] 00:34:06.183 bw ( KiB/s): min=27184, max=28496, per=26.48%, avg=27840.00, stdev=927.72, samples=2 00:34:06.183 iops : min= 6796, max= 7124, avg=6960.00, stdev=231.93, samples=2 00:34:06.183 lat (msec) : 2=0.07%, 4=0.68%, 10=76.13%, 
20=20.09%, 50=2.23% 00:34:06.183 lat (msec) : 100=0.81% 00:34:06.184 cpu : usr=4.49%, sys=6.68%, ctx=530, majf=0, minf=1 00:34:06.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:06.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:06.184 issued rwts: total=6656,7087,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.184 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:06.184 job2: (groupid=0, jobs=1): err= 0: pid=3528348: Wed Nov 6 11:15:57 2024 00:34:06.184 read: IOPS=7118, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1007msec) 00:34:06.184 slat (nsec): min=933, max=9341.0k, avg=73126.29, stdev=588016.23 00:34:06.184 clat (usec): min=2724, max=18888, avg=9367.64, stdev=2490.76 00:34:06.184 lat (usec): min=2727, max=19154, avg=9440.77, stdev=2527.63 00:34:06.184 clat percentiles (usec): 00:34:06.184 | 1.00th=[ 4555], 5.00th=[ 6063], 10.00th=[ 6849], 20.00th=[ 7177], 00:34:06.184 | 30.00th=[ 7832], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9503], 00:34:06.184 | 70.00th=[10028], 80.00th=[11076], 90.00th=[12911], 95.00th=[14222], 00:34:06.184 | 99.00th=[17171], 99.50th=[17957], 99.90th=[17957], 99.95th=[18482], 00:34:06.184 | 99.99th=[19006] 00:34:06.184 write: IOPS=7504, BW=29.3MiB/s (30.7MB/s)(29.5MiB/1007msec); 0 zone resets 00:34:06.184 slat (nsec): min=1627, max=7573.4k, avg=58529.55, stdev=386565.99 00:34:06.184 clat (usec): min=1177, max=17087, avg=7994.65, stdev=1933.81 00:34:06.184 lat (usec): min=1187, max=17095, avg=8053.18, stdev=1947.44 00:34:06.184 clat percentiles (usec): 00:34:06.184 | 1.00th=[ 2933], 5.00th=[ 5080], 10.00th=[ 5407], 20.00th=[ 6521], 00:34:06.184 | 30.00th=[ 7373], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8356], 00:34:06.184 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9634], 95.00th=[11731], 00:34:06.184 | 99.00th=[12387], 99.50th=[15664], 99.90th=[16909], 99.95th=[17171], 
00:34:06.184 | 99.99th=[17171] 00:34:06.184 bw ( KiB/s): min=28672, max=30760, per=28.27%, avg=29716.00, stdev=1476.44, samples=2 00:34:06.184 iops : min= 7168, max= 7690, avg=7429.00, stdev=369.11, samples=2 00:34:06.184 lat (msec) : 2=0.16%, 4=1.03%, 10=79.76%, 20=19.06% 00:34:06.184 cpu : usr=5.57%, sys=6.16%, ctx=635, majf=0, minf=1 00:34:06.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:06.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:06.184 issued rwts: total=7168,7557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.184 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:06.184 job3: (groupid=0, jobs=1): err= 0: pid=3528355: Wed Nov 6 11:15:57 2024 00:34:06.184 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:34:06.184 slat (nsec): min=1036, max=13468k, avg=114177.43, stdev=918977.33 00:34:06.184 clat (usec): min=5593, max=33072, avg=14252.16, stdev=3834.09 00:34:06.184 lat (usec): min=5601, max=33080, avg=14366.34, stdev=3923.12 00:34:06.184 clat percentiles (usec): 00:34:06.184 | 1.00th=[ 8979], 5.00th=[10421], 10.00th=[10814], 20.00th=[11863], 00:34:06.184 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12780], 60.00th=[13566], 00:34:06.184 | 70.00th=[14353], 80.00th=[17171], 90.00th=[19792], 95.00th=[22152], 00:34:06.184 | 99.00th=[26608], 99.50th=[31589], 99.90th=[33162], 99.95th=[33162], 00:34:06.184 | 99.99th=[33162] 00:34:06.184 write: IOPS=4139, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1009msec); 0 zone resets 00:34:06.184 slat (nsec): min=1735, max=15020k, avg=122338.45, stdev=787438.25 00:34:06.184 clat (usec): min=3451, max=86289, avg=16141.44, stdev=14115.29 00:34:06.184 lat (usec): min=3459, max=86311, avg=16263.78, stdev=14213.77 00:34:06.184 clat percentiles (usec): 00:34:06.184 | 1.00th=[ 5080], 5.00th=[ 8291], 10.00th=[ 9372], 20.00th=[10159], 00:34:06.184 | 30.00th=[10814], 
40.00th=[11076], 50.00th=[11731], 60.00th=[12649], 00:34:06.184 | 70.00th=[13042], 80.00th=[16909], 90.00th=[23200], 95.00th=[57410], 00:34:06.184 | 99.00th=[79168], 99.50th=[80217], 99.90th=[86508], 99.95th=[86508], 00:34:06.184 | 99.99th=[86508] 00:34:06.184 bw ( KiB/s): min=12288, max=20480, per=15.59%, avg=16384.00, stdev=5792.62, samples=2 00:34:06.184 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:34:06.184 lat (msec) : 4=0.13%, 10=11.16%, 20=76.83%, 50=8.90%, 100=2.99% 00:34:06.184 cpu : usr=2.88%, sys=4.27%, ctx=370, majf=0, minf=1 00:34:06.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:06.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:06.184 issued rwts: total=4096,4177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.184 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:06.184 00:34:06.184 Run status group 0 (all jobs): 00:34:06.184 READ: bw=99.0MiB/s (104MB/s), 15.9MiB/s-29.7MiB/s (16.6MB/s-31.1MB/s), io=100MiB (105MB), run=1004-1010msec 00:34:06.184 WRITE: bw=103MiB/s (108MB/s), 16.2MiB/s-29.9MiB/s (17.0MB/s-31.3MB/s), io=104MiB (109MB), run=1004-1010msec 00:34:06.184 00:34:06.184 Disk stats (read/write): 00:34:06.184 nvme0n1: ios=6676/7159, merge=0/0, ticks=53378/48912, in_queue=102290, util=84.27% 00:34:06.184 nvme0n2: ios=5571/5632, merge=0/0, ticks=48158/44850, in_queue=93008, util=88.58% 00:34:06.184 nvme0n3: ios=5827/6144, merge=0/0, ticks=53006/48382, in_queue=101388, util=94.84% 00:34:06.184 nvme0n4: ios=3632/3711, merge=0/0, ticks=47992/47981, in_queue=95973, util=97.01% 00:34:06.184 11:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:06.184 [global] 00:34:06.184 thread=1 00:34:06.184 invalidate=1 
00:34:06.184 rw=randwrite 00:34:06.184 time_based=1 00:34:06.184 runtime=1 00:34:06.184 ioengine=libaio 00:34:06.184 direct=1 00:34:06.184 bs=4096 00:34:06.184 iodepth=128 00:34:06.184 norandommap=0 00:34:06.184 numjobs=1 00:34:06.184 00:34:06.184 verify_dump=1 00:34:06.184 verify_backlog=512 00:34:06.184 verify_state_save=0 00:34:06.184 do_verify=1 00:34:06.184 verify=crc32c-intel 00:34:06.184 [job0] 00:34:06.184 filename=/dev/nvme0n1 00:34:06.184 [job1] 00:34:06.184 filename=/dev/nvme0n2 00:34:06.184 [job2] 00:34:06.184 filename=/dev/nvme0n3 00:34:06.184 [job3] 00:34:06.184 filename=/dev/nvme0n4 00:34:06.184 Could not set queue depth (nvme0n1) 00:34:06.184 Could not set queue depth (nvme0n2) 00:34:06.184 Could not set queue depth (nvme0n3) 00:34:06.184 Could not set queue depth (nvme0n4) 00:34:06.445 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:06.445 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:06.445 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:06.445 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:06.445 fio-3.35 00:34:06.445 Starting 4 threads 00:34:07.830 00:34:07.830 job0: (groupid=0, jobs=1): err= 0: pid=3528796: Wed Nov 6 11:15:59 2024 00:34:07.830 read: IOPS=6596, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1009msec) 00:34:07.830 slat (nsec): min=938, max=8834.0k, avg=75963.79, stdev=642486.40 00:34:07.830 clat (usec): min=3138, max=18857, avg=9937.69, stdev=2643.01 00:34:07.830 lat (usec): min=3145, max=21806, avg=10013.66, stdev=2691.15 00:34:07.830 clat percentiles (usec): 00:34:07.830 | 1.00th=[ 3785], 5.00th=[ 5997], 10.00th=[ 7570], 20.00th=[ 8455], 00:34:07.830 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:34:07.830 | 70.00th=[ 9765], 80.00th=[11076], 
90.00th=[14615], 95.00th=[15926], 00:34:07.830 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18482], 99.95th=[18482], 00:34:07.830 | 99.99th=[18744] 00:34:07.830 write: IOPS=7014, BW=27.4MiB/s (28.7MB/s)(27.6MiB/1009msec); 0 zone resets 00:34:07.830 slat (nsec): min=1542, max=7894.5k, avg=62664.92, stdev=410480.52 00:34:07.830 clat (usec): min=1225, max=17858, avg=8736.54, stdev=2227.86 00:34:07.830 lat (usec): min=1241, max=17881, avg=8799.21, stdev=2245.97 00:34:07.830 clat percentiles (usec): 00:34:07.830 | 1.00th=[ 2638], 5.00th=[ 5342], 10.00th=[ 6128], 20.00th=[ 6718], 00:34:07.830 | 30.00th=[ 7767], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9503], 00:34:07.830 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[12780], 00:34:07.830 | 99.00th=[14484], 99.50th=[16057], 99.90th=[17433], 99.95th=[17433], 00:34:07.830 | 99.99th=[17957] 00:34:07.830 bw ( KiB/s): min=26936, max=28672, per=29.60%, avg=27804.00, stdev=1227.54, samples=2 00:34:07.830 iops : min= 6734, max= 7168, avg=6951.00, stdev=306.88, samples=2 00:34:07.830 lat (msec) : 2=0.09%, 4=1.84%, 10=78.02%, 20=20.05% 00:34:07.830 cpu : usr=5.46%, sys=6.94%, ctx=592, majf=0, minf=1 00:34:07.830 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:07.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:07.830 issued rwts: total=6656,7078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:07.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:07.830 job1: (groupid=0, jobs=1): err= 0: pid=3528811: Wed Nov 6 11:15:59 2024 00:34:07.830 read: IOPS=5424, BW=21.2MiB/s (22.2MB/s)(21.3MiB/1003msec) 00:34:07.830 slat (nsec): min=899, max=14999k, avg=84091.84, stdev=510222.15 00:34:07.830 clat (usec): min=781, max=50904, avg=11358.85, stdev=6875.40 00:34:07.830 lat (usec): min=3035, max=50909, avg=11442.94, stdev=6888.77 00:34:07.830 clat percentiles (usec): 
00:34:07.830 | 1.00th=[ 5342], 5.00th=[ 7242], 10.00th=[ 7767], 20.00th=[ 8356], 00:34:07.830 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:34:07.830 | 70.00th=[ 9634], 80.00th=[13566], 90.00th=[17957], 95.00th=[22938], 00:34:07.830 | 99.00th=[49021], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:34:07.830 | 99.99th=[51119] 00:34:07.830 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:34:07.830 slat (nsec): min=1506, max=16649k, avg=93452.26, stdev=672630.50 00:34:07.830 clat (usec): min=5158, max=47565, avg=11557.28, stdev=9078.82 00:34:07.830 lat (usec): min=5162, max=50908, avg=11650.73, stdev=9137.20 00:34:07.830 clat percentiles (usec): 00:34:07.830 | 1.00th=[ 5997], 5.00th=[ 6652], 10.00th=[ 6915], 20.00th=[ 7373], 00:34:07.830 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8094], 00:34:07.830 | 70.00th=[ 8717], 80.00th=[10945], 90.00th=[27395], 95.00th=[36439], 00:34:07.830 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:34:07.830 | 99.99th=[47449] 00:34:07.830 bw ( KiB/s): min=15864, max=29192, per=23.99%, avg=22528.00, stdev=9424.32, samples=2 00:34:07.830 iops : min= 3966, max= 7298, avg=5632.00, stdev=2356.08, samples=2 00:34:07.830 lat (usec) : 1000=0.01% 00:34:07.830 lat (msec) : 4=0.29%, 10=73.61%, 20=15.68%, 50=10.13%, 100=0.28% 00:34:07.830 cpu : usr=2.50%, sys=3.59%, ctx=514, majf=0, minf=1 00:34:07.830 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:07.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:07.830 issued rwts: total=5441,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:07.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:07.830 job2: (groupid=0, jobs=1): err= 0: pid=3528828: Wed Nov 6 11:15:59 2024 00:34:07.830 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 
00:34:07.830 slat (nsec): min=977, max=19757k, avg=106119.64, stdev=842494.41 00:34:07.830 clat (usec): min=4529, max=48886, avg=13912.87, stdev=8679.78 00:34:07.830 lat (usec): min=4535, max=48913, avg=14018.99, stdev=8757.76 00:34:07.830 clat percentiles (usec): 00:34:07.830 | 1.00th=[ 5800], 5.00th=[ 6783], 10.00th=[ 7308], 20.00th=[ 7898], 00:34:07.830 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[10552], 00:34:07.830 | 70.00th=[15008], 80.00th=[23200], 90.00th=[27657], 95.00th=[32113], 00:34:07.830 | 99.00th=[40109], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:34:07.830 | 99.99th=[49021] 00:34:07.830 write: IOPS=4798, BW=18.7MiB/s (19.7MB/s)(18.9MiB/1008msec); 0 zone resets 00:34:07.830 slat (nsec): min=1583, max=14399k, avg=101721.69, stdev=820625.85 00:34:07.830 clat (usec): min=886, max=47455, avg=13129.34, stdev=7048.57 00:34:07.830 lat (usec): min=3665, max=47482, avg=13231.07, stdev=7128.44 00:34:07.830 clat percentiles (usec): 00:34:07.830 | 1.00th=[ 5538], 5.00th=[ 7177], 10.00th=[ 7701], 20.00th=[ 7898], 00:34:07.830 | 30.00th=[ 8029], 40.00th=[ 8225], 50.00th=[ 9110], 60.00th=[10683], 00:34:07.830 | 70.00th=[17171], 80.00th=[19792], 90.00th=[25297], 95.00th=[28443], 00:34:07.830 | 99.00th=[29754], 99.50th=[30016], 99.90th=[37487], 99.95th=[39584], 00:34:07.830 | 99.99th=[47449] 00:34:07.830 bw ( KiB/s): min=12288, max=25384, per=20.06%, avg=18836.00, stdev=9260.27, samples=2 00:34:07.830 iops : min= 3072, max= 6346, avg=4709.00, stdev=2315.07, samples=2 00:34:07.830 lat (usec) : 1000=0.01% 00:34:07.830 lat (msec) : 4=0.06%, 10=57.66%, 20=19.52%, 50=22.74% 00:34:07.830 cpu : usr=3.08%, sys=5.36%, ctx=278, majf=0, minf=1 00:34:07.830 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:07.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:07.830 issued rwts: total=4608,4837,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:34:07.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:07.830 job3: (groupid=0, jobs=1): err= 0: pid=3528834: Wed Nov 6 11:15:59 2024 00:34:07.830 read: IOPS=5907, BW=23.1MiB/s (24.2MB/s)(23.3MiB/1008msec) 00:34:07.830 slat (nsec): min=1072, max=8960.1k, avg=72285.05, stdev=601832.69 00:34:07.830 clat (usec): min=2997, max=47495, avg=10790.84, stdev=3127.65 00:34:07.830 lat (usec): min=3004, max=47502, avg=10863.12, stdev=3162.36 00:34:07.830 clat percentiles (usec): 00:34:07.830 | 1.00th=[ 4555], 5.00th=[ 6783], 10.00th=[ 7898], 20.00th=[ 8717], 00:34:07.831 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[10028], 60.00th=[10683], 00:34:07.831 | 70.00th=[11994], 80.00th=[13042], 90.00th=[15270], 95.00th=[16450], 00:34:07.831 | 99.00th=[19006], 99.50th=[20317], 99.90th=[22676], 99.95th=[46400], 00:34:07.831 | 99.99th=[47449] 00:34:07.831 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:34:07.831 slat (nsec): min=1669, max=8398.1k, avg=67964.73, stdev=507748.02 00:34:07.831 clat (usec): min=666, max=33187, avg=10344.14, stdev=5005.12 00:34:07.831 lat (usec): min=678, max=33198, avg=10412.11, stdev=5039.83 00:34:07.831 clat percentiles (usec): 00:34:07.831 | 1.00th=[ 3195], 5.00th=[ 4883], 10.00th=[ 5800], 20.00th=[ 6915], 00:34:07.831 | 30.00th=[ 7767], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[10159], 00:34:07.831 | 70.00th=[10945], 80.00th=[13042], 90.00th=[14746], 95.00th=[19530], 00:34:07.831 | 99.00th=[30540], 99.50th=[31589], 99.90th=[33162], 99.95th=[33162], 00:34:07.831 | 99.99th=[33162] 00:34:07.831 bw ( KiB/s): min=23992, max=25160, per=26.17%, avg=24576.00, stdev=825.90, samples=2 00:34:07.831 iops : min= 5998, max= 6290, avg=6144.00, stdev=206.48, samples=2 00:34:07.831 lat (usec) : 750=0.02% 00:34:07.831 lat (msec) : 2=0.12%, 4=2.15%, 10=51.67%, 20=43.28%, 50=2.75% 00:34:07.831 cpu : usr=4.07%, sys=7.65%, ctx=353, majf=0, minf=2 00:34:07.831 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:07.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:07.831 issued rwts: total=5955,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:07.831 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:07.831 00:34:07.831 Run status group 0 (all jobs): 00:34:07.831 READ: bw=87.7MiB/s (92.0MB/s), 17.9MiB/s-25.8MiB/s (18.7MB/s-27.0MB/s), io=88.5MiB (92.8MB), run=1003-1009msec 00:34:07.831 WRITE: bw=91.7MiB/s (96.2MB/s), 18.7MiB/s-27.4MiB/s (19.7MB/s-28.7MB/s), io=92.5MiB (97.0MB), run=1003-1009msec 00:34:07.831 00:34:07.831 Disk stats (read/write): 00:34:07.831 nvme0n1: ios=5682/5719, merge=0/0, ticks=53256/47188, in_queue=100444, util=86.17% 00:34:07.831 nvme0n2: ios=4498/4608, merge=0/0, ticks=11385/13744, in_queue=25129, util=91.59% 00:34:07.831 nvme0n3: ios=3920/4096, merge=0/0, ticks=26754/22782, in_queue=49536, util=100.00% 00:34:07.831 nvme0n4: ios=4695/5120, merge=0/0, ticks=48817/52140, in_queue=100957, util=97.74% 00:34:07.831 11:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:07.831 11:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3529106 00:34:07.831 11:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:07.831 11:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:07.831 [global] 00:34:07.831 thread=1 00:34:07.831 invalidate=1 00:34:07.831 rw=read 00:34:07.831 time_based=1 00:34:07.831 runtime=10 00:34:07.831 ioengine=libaio 00:34:07.831 direct=1 00:34:07.831 bs=4096 00:34:07.831 iodepth=1 00:34:07.831 norandommap=1 00:34:07.831 numjobs=1 00:34:07.831 00:34:07.831 [job0] 00:34:07.831 
filename=/dev/nvme0n1 00:34:07.831 [job1] 00:34:07.831 filename=/dev/nvme0n2 00:34:07.831 [job2] 00:34:07.831 filename=/dev/nvme0n3 00:34:07.831 [job3] 00:34:07.831 filename=/dev/nvme0n4 00:34:07.831 Could not set queue depth (nvme0n1) 00:34:07.831 Could not set queue depth (nvme0n2) 00:34:07.831 Could not set queue depth (nvme0n3) 00:34:07.831 Could not set queue depth (nvme0n4) 00:34:08.091 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:08.091 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:08.091 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:08.091 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:08.091 fio-3.35 00:34:08.091 Starting 4 threads 00:34:11.396 11:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:11.396 11:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:11.396 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=733184, buflen=4096 00:34:11.396 fio: pid=3529324, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:11.396 11:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:11.396 11:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:11.396 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=274432, buflen=4096 00:34:11.396 fio: 
pid=3529322, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:11.396 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10817536, buflen=4096 00:34:11.396 fio: pid=3529311, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:11.396 11:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:11.396 11:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:11.396 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12660736, buflen=4096 00:34:11.396 fio: pid=3529316, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:11.396 11:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:11.396 11:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:11.658 00:34:11.658 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3529311: Wed Nov 6 11:16:02 2024 00:34:11.658 read: IOPS=893, BW=3574KiB/s (3660kB/s)(10.3MiB/2956msec) 00:34:11.658 slat (usec): min=6, max=32970, avg=40.83, stdev=665.04 00:34:11.658 clat (usec): min=557, max=42112, avg=1063.96, stdev=1531.96 00:34:11.658 lat (usec): min=582, max=42152, avg=1104.80, stdev=1669.55 00:34:11.658 clat percentiles (usec): 00:34:11.658 | 1.00th=[ 750], 5.00th=[ 816], 10.00th=[ 865], 20.00th=[ 922], 00:34:11.658 | 30.00th=[ 955], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1037], 00:34:11.658 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1172], 00:34:11.658 | 
99.00th=[ 1287], 99.50th=[ 1336], 99.90th=[41681], 99.95th=[42206], 00:34:11.658 | 99.99th=[42206] 00:34:11.658 bw ( KiB/s): min= 2696, max= 3880, per=47.42%, avg=3611.20, stdev=512.64, samples=5 00:34:11.658 iops : min= 674, max= 970, avg=902.80, stdev=128.16, samples=5 00:34:11.658 lat (usec) : 750=1.14%, 1000=44.78% 00:34:11.658 lat (msec) : 2=53.90%, 50=0.15% 00:34:11.658 cpu : usr=1.02%, sys=2.57%, ctx=2644, majf=0, minf=2 00:34:11.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.658 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.658 issued rwts: total=2642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:11.658 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3529316: Wed Nov 6 11:16:02 2024 00:34:11.658 read: IOPS=984, BW=3938KiB/s (4032kB/s)(12.1MiB/3140msec) 00:34:11.658 slat (usec): min=6, max=12081, avg=38.56, stdev=323.55 00:34:11.658 clat (usec): min=252, max=1561, avg=962.94, stdev=107.19 00:34:11.658 lat (usec): min=262, max=13003, avg=1001.51, stdev=341.77 00:34:11.658 clat percentiles (usec): 00:34:11.658 | 1.00th=[ 611], 5.00th=[ 766], 10.00th=[ 824], 20.00th=[ 898], 00:34:11.658 | 30.00th=[ 938], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 996], 00:34:11.658 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1106], 00:34:11.658 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1303], 99.95th=[ 1369], 00:34:11.658 | 99.99th=[ 1565] 00:34:11.658 bw ( KiB/s): min= 3773, max= 4144, per=52.15%, avg=3971.50, stdev=128.30, samples=6 00:34:11.658 iops : min= 943, max= 1036, avg=992.83, stdev=32.15, samples=6 00:34:11.658 lat (usec) : 500=0.19%, 750=3.82%, 1000=58.89% 00:34:11.658 lat (msec) : 2=37.06% 00:34:11.658 cpu : usr=1.50%, sys=4.21%, ctx=3098, majf=0, minf=2 
00:34:11.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.658 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.658 issued rwts: total=3092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:11.658 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3529322: Wed Nov 6 11:16:02 2024 00:34:11.658 read: IOPS=24, BW=95.9KiB/s (98.2kB/s)(268KiB/2795msec) 00:34:11.658 slat (usec): min=25, max=220, avg=28.81, stdev=23.63 00:34:11.658 clat (usec): min=1088, max=42240, avg=41347.58, stdev=4994.68 00:34:11.658 lat (usec): min=1124, max=42266, avg=41376.43, stdev=4993.73 00:34:11.658 clat percentiles (usec): 00:34:11.658 | 1.00th=[ 1090], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:34:11.658 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:34:11.658 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:11.658 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:11.658 | 99.99th=[42206] 00:34:11.658 bw ( KiB/s): min= 96, max= 96, per=1.26%, avg=96.00, stdev= 0.00, samples=5 00:34:11.658 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:34:11.658 lat (msec) : 2=1.47%, 50=97.06% 00:34:11.658 cpu : usr=0.11%, sys=0.00%, ctx=69, majf=0, minf=2 00:34:11.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.658 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.658 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:11.658 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, 
error=Operation not supported): pid=3529324: Wed Nov 6 11:16:02 2024 00:34:11.658 read: IOPS=68, BW=273KiB/s (280kB/s)(716KiB/2621msec) 00:34:11.658 slat (nsec): min=16479, max=46309, avg=25267.98, stdev=4083.36 00:34:11.658 clat (usec): min=785, max=42310, avg=14486.05, stdev=19076.23 00:34:11.658 lat (usec): min=825, max=42339, avg=14511.37, stdev=19075.86 00:34:11.658 clat percentiles (usec): 00:34:11.658 | 1.00th=[ 816], 5.00th=[ 988], 10.00th=[ 1037], 20.00th=[ 1090], 00:34:11.658 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1221], 60.00th=[ 1336], 00:34:11.658 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:34:11.658 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:11.658 | 99.99th=[42206] 00:34:11.658 bw ( KiB/s): min= 96, max= 576, per=3.48%, avg=265.60, stdev=196.89, samples=5 00:34:11.658 iops : min= 24, max= 144, avg=66.40, stdev=49.22, samples=5 00:34:11.658 lat (usec) : 1000=5.00% 00:34:11.658 lat (msec) : 2=61.67%, 50=32.78% 00:34:11.658 cpu : usr=0.08%, sys=0.15%, ctx=180, majf=0, minf=1 00:34:11.658 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.658 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.658 issued rwts: total=180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.658 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:11.658 00:34:11.658 Run status group 0 (all jobs): 00:34:11.658 READ: bw=7615KiB/s (7798kB/s), 95.9KiB/s-3938KiB/s (98.2kB/s-4032kB/s), io=23.4MiB (24.5MB), run=2621-3140msec 00:34:11.658 00:34:11.658 Disk stats (read/write): 00:34:11.658 nvme0n1: ios=2548/0, merge=0/0, ticks=2744/0, in_queue=2744, util=93.39% 00:34:11.658 nvme0n2: ios=3085/0, merge=0/0, ticks=3553/0, in_queue=3553, util=98.36% 00:34:11.658 nvme0n3: ios=62/0, merge=0/0, ticks=2562/0, in_queue=2562, util=96.03% 00:34:11.658 nvme0n4: ios=178/0, 
merge=0/0, ticks=2549/0, in_queue=2549, util=96.42% 00:34:11.658 11:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:11.658 11:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:11.919 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:11.919 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:12.179 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:12.179 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:12.179 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:12.179 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:12.440 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:12.440 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3529106 00:34:12.440 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:12.440 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # 
nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:12.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:12.440 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:12.440 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:34:12.440 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:12.440 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:12.440 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:12.440 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:12.440 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:34:12.440 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:12.440 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:12.440 nvmf hotplug test: fio failed as expected 00:34:12.440 11:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:12.701 11:16:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:12.701 rmmod nvme_tcp 00:34:12.701 rmmod nvme_fabrics 00:34:12.701 rmmod nvme_keyring 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3525926 ']' 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3525926 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3525926 ']' 00:34:12.701 11:16:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3525926 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:12.701 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3525926 00:34:12.962 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:12.962 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:12.962 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3525926' 00:34:12.962 killing process with pid 3525926 00:34:12.962 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3525926 00:34:12.962 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3525926 00:34:12.962 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:12.962 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:12.962 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:12.962 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:12.962 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:12.962 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:34:12.962 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:12.962 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:12.962 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:12.962 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.962 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.962 11:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:15.510 00:34:15.510 real 0m27.715s 00:34:15.510 user 2m14.849s 00:34:15.510 sys 0m12.225s 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:15.510 ************************************ 00:34:15.510 END TEST nvmf_fio_target 00:34:15.510 ************************************ 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@10 -- # set +x 00:34:15.510 ************************************ 00:34:15.510 START TEST nvmf_bdevio 00:34:15.510 ************************************ 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:15.510 * Looking for test storage... 00:34:15.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 
-- # local 'op=<' 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@366 -- # ver2[v]=2 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:15.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.510 --rc genhtml_branch_coverage=1 00:34:15.510 --rc genhtml_function_coverage=1 00:34:15.510 --rc genhtml_legend=1 00:34:15.510 --rc geninfo_all_blocks=1 00:34:15.510 --rc geninfo_unexecuted_blocks=1 00:34:15.510 00:34:15.510 ' 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:15.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.510 --rc genhtml_branch_coverage=1 00:34:15.510 --rc genhtml_function_coverage=1 00:34:15.510 --rc genhtml_legend=1 00:34:15.510 --rc geninfo_all_blocks=1 00:34:15.510 --rc geninfo_unexecuted_blocks=1 00:34:15.510 00:34:15.510 ' 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:15.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.510 --rc genhtml_branch_coverage=1 00:34:15.510 --rc genhtml_function_coverage=1 00:34:15.510 --rc genhtml_legend=1 00:34:15.510 --rc geninfo_all_blocks=1 00:34:15.510 --rc geninfo_unexecuted_blocks=1 00:34:15.510 00:34:15.510 ' 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:15.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.510 --rc genhtml_branch_coverage=1 00:34:15.510 --rc genhtml_function_coverage=1 00:34:15.510 --rc genhtml_legend=1 00:34:15.510 --rc geninfo_all_blocks=1 00:34:15.510 --rc geninfo_unexecuted_blocks=1 00:34:15.510 00:34:15.510 ' 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.510 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.511 11:16:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:15.511 11:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:23.654 11:16:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:23.654 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:23.655 11:16:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:23.655 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:23.655 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:23.655 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:23.655 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:23.655 
11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:23.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:23.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:34:23.655 00:34:23.655 --- 10.0.0.2 ping statistics --- 00:34:23.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:23.655 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:23.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:23.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:34:23.655 00:34:23.655 --- 10.0.0.1 ping statistics --- 00:34:23.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:23.655 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3534323 00:34:23.655 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3534323 00:34:23.656 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3534323 ']' 00:34:23.656 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:23.656 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:23.656 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:23.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:23.656 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:23.656 11:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:23.656 [2024-11-06 11:16:13.893278] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:23.656 [2024-11-06 11:16:13.894014] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:34:23.656 [2024-11-06 11:16:13.894043] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:23.656 [2024-11-06 11:16:13.987365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:23.656 [2024-11-06 11:16:14.023941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:23.656 [2024-11-06 11:16:14.023972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:23.656 [2024-11-06 11:16:14.023979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:23.656 [2024-11-06 11:16:14.023987] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:23.656 [2024-11-06 11:16:14.023993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:23.656 [2024-11-06 11:16:14.025697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:23.656 [2024-11-06 11:16:14.025716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:23.656 [2024-11-06 11:16:14.025859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:23.656 [2024-11-06 11:16:14.025976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:23.656 [2024-11-06 11:16:14.081663] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:23.656 [2024-11-06 11:16:14.082688] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:23.656 [2024-11-06 11:16:14.083323] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:23.656 [2024-11-06 11:16:14.083465] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:23.656 [2024-11-06 11:16:14.083590] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:23.656 [2024-11-06 11:16:14.750768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:23.656 Malloc0 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:23.656 [2024-11-06 11:16:14.838986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:23.656 { 00:34:23.656 "params": { 00:34:23.656 "name": "Nvme$subsystem", 00:34:23.656 "trtype": "$TEST_TRANSPORT", 00:34:23.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:23.656 "adrfam": "ipv4", 00:34:23.656 "trsvcid": "$NVMF_PORT", 00:34:23.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:23.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:23.656 "hdgst": ${hdgst:-false}, 00:34:23.656 "ddgst": ${ddgst:-false} 00:34:23.656 }, 00:34:23.656 "method": "bdev_nvme_attach_controller" 00:34:23.656 } 00:34:23.656 EOF 00:34:23.656 )") 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:23.656 11:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:23.656 "params": { 00:34:23.656 "name": "Nvme1", 00:34:23.656 "trtype": "tcp", 00:34:23.656 "traddr": "10.0.0.2", 00:34:23.656 "adrfam": "ipv4", 00:34:23.656 "trsvcid": "4420", 00:34:23.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:23.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:23.656 "hdgst": false, 00:34:23.656 "ddgst": false 00:34:23.656 }, 00:34:23.656 "method": "bdev_nvme_attach_controller" 00:34:23.656 }' 00:34:23.656 [2024-11-06 11:16:14.893593] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:34:23.656 [2024-11-06 11:16:14.893652] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3534670 ] 00:34:23.656 [2024-11-06 11:16:14.965951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:23.656 [2024-11-06 11:16:15.005197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:23.656 [2024-11-06 11:16:15.005312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:23.656 [2024-11-06 11:16:15.005315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:23.917 I/O targets: 00:34:23.917 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:23.917 00:34:23.917 00:34:23.917 CUnit - A unit testing framework for C - Version 2.1-3 00:34:23.917 http://cunit.sourceforge.net/ 00:34:23.917 00:34:23.917 00:34:23.917 Suite: bdevio tests on: Nvme1n1 00:34:24.178 Test: blockdev write read block ...passed 00:34:24.178 Test: blockdev write zeroes read block ...passed 00:34:24.178 Test: blockdev write zeroes read no split ...passed 00:34:24.178 Test: blockdev 
write zeroes read split ...passed 00:34:24.178 Test: blockdev write zeroes read split partial ...passed 00:34:24.178 Test: blockdev reset ...[2024-11-06 11:16:15.468102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:24.178 [2024-11-06 11:16:15.468164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c3970 (9): Bad file descriptor 00:34:24.178 [2024-11-06 11:16:15.472446] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:34:24.178 passed 00:34:24.178 Test: blockdev write read 8 blocks ...passed 00:34:24.178 Test: blockdev write read size > 128k ...passed 00:34:24.178 Test: blockdev write read invalid size ...passed 00:34:24.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:24.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:24.178 Test: blockdev write read max offset ...passed 00:34:24.440 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:24.440 Test: blockdev writev readv 8 blocks ...passed 00:34:24.440 Test: blockdev writev readv 30 x 1block ...passed 00:34:24.440 Test: blockdev writev readv block ...passed 00:34:24.440 Test: blockdev writev readv size > 128k ...passed 00:34:24.440 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:24.440 Test: blockdev comparev and writev ...[2024-11-06 11:16:15.739959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:24.440 [2024-11-06 11:16:15.739985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:24.440 [2024-11-06 11:16:15.739996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:24.440 
[2024-11-06 11:16:15.740002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:24.440 [2024-11-06 11:16:15.740487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:24.440 [2024-11-06 11:16:15.740497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:24.440 [2024-11-06 11:16:15.740506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:24.440 [2024-11-06 11:16:15.740512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:24.440 [2024-11-06 11:16:15.741045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:24.440 [2024-11-06 11:16:15.741054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:24.440 [2024-11-06 11:16:15.741064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:24.440 [2024-11-06 11:16:15.741071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:24.440 [2024-11-06 11:16:15.741609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:24.440 [2024-11-06 11:16:15.741618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:24.440 [2024-11-06 11:16:15.741627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:24.440 [2024-11-06 11:16:15.741632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:24.440 passed 00:34:24.440 Test: blockdev nvme passthru rw ...passed 00:34:24.440 Test: blockdev nvme passthru vendor specific ...[2024-11-06 11:16:15.826639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:24.440 [2024-11-06 11:16:15.826652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:24.440 [2024-11-06 11:16:15.826977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:24.440 [2024-11-06 11:16:15.826985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:24.440 [2024-11-06 11:16:15.827312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:24.440 [2024-11-06 11:16:15.827320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:24.440 [2024-11-06 11:16:15.827650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:24.440 [2024-11-06 11:16:15.827659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:24.440 passed 00:34:24.440 Test: blockdev nvme admin passthru ...passed 00:34:24.701 Test: blockdev copy ...passed 00:34:24.701 00:34:24.701 Run Summary: Type Total Ran Passed Failed Inactive 00:34:24.701 suites 1 1 n/a 0 0 00:34:24.701 tests 23 23 23 0 0 00:34:24.701 asserts 152 152 152 0 n/a 00:34:24.701 00:34:24.701 Elapsed time = 1.160 
seconds 00:34:24.701 11:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:24.701 11:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.701 11:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:24.701 11:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.701 11:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:24.701 11:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:24.701 11:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:24.701 11:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:24.701 11:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:24.701 11:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:24.701 11:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:24.701 11:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:24.701 rmmod nvme_tcp 00:34:24.701 rmmod nvme_fabrics 00:34:24.701 rmmod nvme_keyring 00:34:24.701 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:24.701 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:24.701 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:24.701 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 3534323 ']' 00:34:24.701 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3534323 00:34:24.701 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3534323 ']' 00:34:24.701 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3534323 00:34:24.701 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:34:24.702 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:24.702 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3534323 00:34:24.962 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:34:24.962 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:34:24.962 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3534323' 00:34:24.962 killing process with pid 3534323 00:34:24.962 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3534323 00:34:24.962 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3534323 00:34:24.962 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:24.962 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:24.962 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:24.962 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:34:24.962 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:24.962 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:24.962 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:24.962 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:24.962 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:24.962 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.962 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:24.962 11:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.510 11:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:27.510 00:34:27.510 real 0m11.909s 00:34:27.510 user 0m9.762s 00:34:27.510 sys 0m6.198s 00:34:27.510 11:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:27.510 11:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:27.510 ************************************ 00:34:27.510 END TEST nvmf_bdevio 00:34:27.510 ************************************ 00:34:27.510 11:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:27.510 00:34:27.510 real 4m54.603s 00:34:27.510 user 10m8.526s 00:34:27.510 sys 2m0.074s 00:34:27.510 11:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 
-- # xtrace_disable 00:34:27.510 11:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:27.510 ************************************ 00:34:27.510 END TEST nvmf_target_core_interrupt_mode 00:34:27.510 ************************************ 00:34:27.510 11:16:18 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:27.510 11:16:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:27.510 11:16:18 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:27.510 11:16:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:27.510 ************************************ 00:34:27.510 START TEST nvmf_interrupt 00:34:27.510 ************************************ 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:27.510 * Looking for test storage... 
00:34:27.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:27.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.510 --rc genhtml_branch_coverage=1 00:34:27.510 --rc genhtml_function_coverage=1 00:34:27.510 --rc genhtml_legend=1 00:34:27.510 --rc geninfo_all_blocks=1 00:34:27.510 --rc geninfo_unexecuted_blocks=1 00:34:27.510 00:34:27.510 ' 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:27.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.510 --rc genhtml_branch_coverage=1 00:34:27.510 --rc 
genhtml_function_coverage=1 00:34:27.510 --rc genhtml_legend=1 00:34:27.510 --rc geninfo_all_blocks=1 00:34:27.510 --rc geninfo_unexecuted_blocks=1 00:34:27.510 00:34:27.510 ' 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:27.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.510 --rc genhtml_branch_coverage=1 00:34:27.510 --rc genhtml_function_coverage=1 00:34:27.510 --rc genhtml_legend=1 00:34:27.510 --rc geninfo_all_blocks=1 00:34:27.510 --rc geninfo_unexecuted_blocks=1 00:34:27.510 00:34:27.510 ' 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:27.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.510 --rc genhtml_branch_coverage=1 00:34:27.510 --rc genhtml_function_coverage=1 00:34:27.510 --rc genhtml_legend=1 00:34:27.510 --rc geninfo_all_blocks=1 00:34:27.510 --rc geninfo_unexecuted_blocks=1 00:34:27.510 00:34:27.510 ' 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:27.510 
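The `cmp_versions` trace above splits two dotted version strings and walks them field by field to decide whether the installed lcov predates 2.0 (the `lt 1.15 2` call). A simplified standalone sketch of that comparison (hypothetical helper name `version_lt`; the real implementation lives in scripts/common.sh and also splits on `-` and `:`) could be:

```shell
#!/usr/bin/env bash
# version_lt A B: exit 0 if dotted version A is strictly less than B.
# Mirrors the loop seen in the trace: split on '.', pad the shorter
# version with zeros, compare each field numerically.
version_lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"      # same comparison as in the trace
version_lt 2.1 2.0 || echo "2.1 >= 2.0"
```

In the run above the comparison succeeds, so the script enables the lcov 1.x branch/function coverage flags in `LCOV_OPTS`.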
11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.510 
11:16:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:34:27.510 11:16:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:27.511 11:16:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:27.511 
11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:27.511 11:16:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:34.219 11:16:25 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:34.219 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:34.219 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:34.219 11:16:25 
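The device walk above matches each discovered PCI `vendor:device` pair against per-family ID tables (Intel E810/X722, Mellanox) to pick NICs the test can use; this run finds two E810 ports (`0x8086 - 0x159b`). A reduced sketch of that bucketing (hypothetical helper name; only a small excerpt of the ID tables kept in nvmf/common.sh) could be:

```shell
#!/usr/bin/env bash
# classify_nic VENDOR DEVICE: print which NIC family a PCI ID belongs to.
# Partial ID table taken from the trace; not exhaustive.
classify_nic() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:0x1017|0x15b3:0x101d) echo mlx ;;     # Mellanox (subset)
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086 0x159b   # the device ID of both ports found in this run
```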
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:34.219 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:34.219 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:34.219 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:34.480 11:16:25 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:34.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:34.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:34:34.480 00:34:34.480 --- 10.0.0.2 ping statistics --- 00:34:34.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.480 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:34.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:34.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:34:34.480 00:34:34.480 --- 10.0.0.1 ping statistics --- 00:34:34.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.480 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:34.480 11:16:25 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3539029 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3539029 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 3539029 ']' 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:34.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:34.480 11:16:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:34.480 [2024-11-06 11:16:25.801988] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:34.480 [2024-11-06 11:16:25.802964] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:34:34.480 [2024-11-06 11:16:25.803001] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:34.480 [2024-11-06 11:16:25.879048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:34.740 [2024-11-06 11:16:25.914244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:34.740 [2024-11-06 11:16:25.914276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:34.740 [2024-11-06 11:16:25.914284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:34.740 [2024-11-06 11:16:25.914290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:34.740 [2024-11-06 11:16:25.914296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:34.740 [2024-11-06 11:16:25.915442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:34.740 [2024-11-06 11:16:25.915444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:34.740 [2024-11-06 11:16:25.970326] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:34.740 [2024-11-06 11:16:25.970770] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:34.740 [2024-11-06 11:16:25.971136] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
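`waitforlisten` above blocks until the freshly launched `nvmf_tgt` (pid 3539029) is alive and accepting RPCs on /var/tmp/spdk.sock, printing the "Waiting for process to start up..." message seen in the log. A minimal sketch of that polling pattern (hypothetical names; the real helper in autotest_common.sh checks the RPC socket, verifies the pid, and retries up to `max_retries=100`) could be:

```shell
#!/usr/bin/env bash
# wait_for_socket PID PATH [RETRIES]: poll until PATH appears while PID
# is still alive; give up after RETRIES attempts. Simplified sketch --
# the real helper probes the UNIX-domain RPC socket, not just the path.
wait_for_socket() {
    local pid=$1 sock=$2 max_retries=${3:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        # Bail out early if the target process died during startup.
        kill -0 "$pid" 2>/dev/null || return 1
        [ -e "$sock" ] && return 0
        sleep 0.1
    done
    return 1  # timed out waiting for the listener
}
```

The trap installed right after (`process_shm ... ; nvmftestfini`) ensures the target is torn down even if a later step fails.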
00:34:35.310 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:35.310 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:34:35.310 11:16:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:35.310 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:35.310 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:35.310 11:16:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:35.310 11:16:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:35.310 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:35.310 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:35.311 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:35.311 5000+0 records in 00:34:35.311 5000+0 records out 00:34:35.311 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0186368 s, 549 MB/s 00:34:35.311 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:35.311 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.311 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:35.311 AIO0 00:34:35.311 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.311 11:16:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:35.311 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.311 11:16:26 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:35.311 [2024-11-06 11:16:26.711997] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:35.311 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.311 11:16:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:35.311 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.311 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:35.572 [2024-11-06 11:16:26.752408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3539029 0 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3539029 0 idle 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3539029 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3539029 -w 256 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3539029 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.22 reactor_0' 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3539029 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.22 reactor_0 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:35.572 
11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3539029 1 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3539029 1 idle 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3539029 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3539029 -w 256 00:34:35.572 11:16:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3539035 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3539035 root 20 0 128.2g 
43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3539325 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3539029 0 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3539029 0 busy 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3539029 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3539029 -w 256 00:34:35.833 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3539029 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.44 reactor_0' 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3539029 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.44 reactor_0 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:36.094 11:16:27 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3539029 1 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3539029 1 busy 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3539029 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3539029 -w 256 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3539035 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.29 reactor_1' 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3539035 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.29 reactor_1 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:36.094 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:36.355 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:36.355 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:34:36.355 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:36.355 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:36.355 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:36.355 11:16:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:36.355 11:16:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3539325 00:34:46.350 Initializing NVMe Controllers 00:34:46.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:46.350 Controller IO queue size 256, less than required. 00:34:46.350 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:46.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:46.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:46.350 Initialization complete. Launching workers. 
00:34:46.350 ======================================================== 00:34:46.350 Latency(us) 00:34:46.350 Device Information : IOPS MiB/s Average min max 00:34:46.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16637.95 64.99 15395.70 3002.04 18886.06 00:34:46.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 20056.84 78.35 12765.17 7837.75 29279.72 00:34:46.350 ======================================================== 00:34:46.350 Total : 36694.79 143.34 13957.89 3002.04 29279.72 00:34:46.350 00:34:46.350 [2024-11-06 11:16:37.251771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f97720 is same with the state(6) to be set 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3539029 0 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3539029 0 idle 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3539029 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:46.350 
11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3539029 -w 256 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3539029 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.24 reactor_0' 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3539029 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.24 reactor_0 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3539029 1 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3539029 1 idle 00:34:46.350 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3539029 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@14 -- # local idle_threshold=30 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3539029 -w 256 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3539035 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1' 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3539035 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:46.351 11:16:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t 
tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:46.919 11:16:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:34:46.919 11:16:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:34:46.919 11:16:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:46.919 11:16:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:34:46.919 11:16:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:34:48.833 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:48.833 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:48.833 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3539029 0 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3539029 0 idle 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3539029 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local 
idle_threshold=30 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3539029 -w 256 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3539029 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.49 reactor_0' 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3539029 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.49 reactor_0 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3539029 1 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3539029 1 idle 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3539029 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3539029 -w 256 00:34:49.094 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:49.355 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3539035 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1' 00:34:49.355 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:49.355 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3539035 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1 00:34:49.355 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:49.355 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:49.355 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:49.355 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:49.355 
11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:49.355 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:49.355 11:16:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:49.355 11:16:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:49.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:49.615 11:16:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:49.615 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:34:49.615 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:49.615 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:49.615 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:49.615 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:49.615 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:34:49.615 11:16:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:49.615 11:16:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:49.615 11:16:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:49.615 11:16:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:49.615 11:16:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:49.615 11:16:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:49.616 11:16:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:49.616 11:16:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:49.616 rmmod nvme_tcp 00:34:49.616 
rmmod nvme_fabrics 00:34:49.616 rmmod nvme_keyring 00:34:49.616 11:16:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:49.616 11:16:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:49.616 11:16:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:49.616 11:16:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 3539029 ']' 00:34:49.616 11:16:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3539029 00:34:49.616 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 3539029 ']' 00:34:49.616 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 3539029 00:34:49.616 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:34:49.616 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:49.616 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3539029 00:34:49.616 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:49.616 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:49.616 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3539029' 00:34:49.616 killing process with pid 3539029 00:34:49.616 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 3539029 00:34:49.616 11:16:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 3539029 00:34:49.877 11:16:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:49.877 11:16:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:49.877 11:16:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:49.877 11:16:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:49.877 11:16:41 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:49.877 11:16:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:34:49.877 11:16:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:34:49.878 11:16:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:49.878 11:16:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:49.878 11:16:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.878 11:16:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:49.878 11:16:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.794 11:16:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:52.055 00:34:52.055 real 0m24.732s 00:34:52.055 user 0m40.197s 00:34:52.055 sys 0m9.038s 00:34:52.055 11:16:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:52.055 11:16:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:52.055 ************************************ 00:34:52.055 END TEST nvmf_interrupt 00:34:52.055 ************************************ 00:34:52.055 00:34:52.055 real 29m46.083s 00:34:52.055 user 61m7.846s 00:34:52.055 sys 9m55.487s 00:34:52.055 11:16:43 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:52.055 11:16:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.055 ************************************ 00:34:52.055 END TEST nvmf_tcp 00:34:52.055 ************************************ 00:34:52.055 11:16:43 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:34:52.055 11:16:43 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:52.055 11:16:43 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:52.055 
11:16:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:52.055 11:16:43 -- common/autotest_common.sh@10 -- # set +x 00:34:52.055 ************************************ 00:34:52.055 START TEST spdkcli_nvmf_tcp 00:34:52.055 ************************************ 00:34:52.055 11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:52.055 * Looking for test storage... 00:34:52.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:52.055 11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:52.055 11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:34:52.055 11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- 
scripts/common.sh@345 -- # : 1 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:52.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.317 --rc genhtml_branch_coverage=1 00:34:52.317 --rc genhtml_function_coverage=1 00:34:52.317 --rc genhtml_legend=1 00:34:52.317 --rc geninfo_all_blocks=1 00:34:52.317 --rc geninfo_unexecuted_blocks=1 00:34:52.317 00:34:52.317 ' 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:52.317 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:34:52.317 --rc genhtml_branch_coverage=1 00:34:52.317 --rc genhtml_function_coverage=1 00:34:52.317 --rc genhtml_legend=1 00:34:52.317 --rc geninfo_all_blocks=1 00:34:52.317 --rc geninfo_unexecuted_blocks=1 00:34:52.317 00:34:52.317 ' 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:52.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.317 --rc genhtml_branch_coverage=1 00:34:52.317 --rc genhtml_function_coverage=1 00:34:52.317 --rc genhtml_legend=1 00:34:52.317 --rc geninfo_all_blocks=1 00:34:52.317 --rc geninfo_unexecuted_blocks=1 00:34:52.317 00:34:52.317 ' 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:52.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.317 --rc genhtml_branch_coverage=1 00:34:52.317 --rc genhtml_function_coverage=1 00:34:52.317 --rc genhtml_legend=1 00:34:52.317 --rc geninfo_all_blocks=1 00:34:52.317 --rc geninfo_unexecuted_blocks=1 00:34:52.317 00:34:52.317 ' 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.317 11:16:43 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:52.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3542592 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3542592 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 3542592 ']' 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:52.318 
11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:52.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:52.318 11:16:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.318 [2024-11-06 11:16:43.622175] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:34:52.318 [2024-11-06 11:16:43.622227] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3542592 ] 00:34:52.318 [2024-11-06 11:16:43.693407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:52.318 [2024-11-06 11:16:43.731679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.318 [2024-11-06 11:16:43.731681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.261 11:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:53.261 11:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:34:53.261 11:16:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:53.261 11:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:53.261 11:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.261 11:16:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:53.261 11:16:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:53.262 11:16:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:34:53.262 11:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:53.262 11:16:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.262 11:16:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:53.262 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:53.262 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:53.262 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:53.262 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:53.262 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:53.262 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:53.262 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:53.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:53.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:53.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:53.262 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:53.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:53.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:53.262 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:34:53.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:53.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:53.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:53.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:53.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:53.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:53.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:53.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:53.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:53.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:53.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:53.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:53.262 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:53.262 ' 00:34:55.811 [2024-11-06 11:16:46.876469] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.754 [2024-11-06 11:16:48.084417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:59.297 [2024-11-06 11:16:50.303017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:35:01.208 [2024-11-06 11:16:52.208682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:02.591 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:02.591 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:02.591 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:02.591 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:02.591 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:02.591 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:02.591 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:02.591 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:02.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:02.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:02.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:02.591 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:02.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:02.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:02.591 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:35:02.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:02.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:02.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:02.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:02.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:02.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:02.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:02.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:02.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:02.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:02.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:02.591 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:02.591 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:02.591 11:16:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:02.591 11:16:53 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@730 -- # xtrace_disable 00:35:02.591 11:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:02.591 11:16:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:02.592 11:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:02.592 11:16:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:02.592 11:16:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:02.592 11:16:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:02.852 11:16:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:02.852 11:16:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:02.852 11:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:02.852 11:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:02.852 11:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:03.113 11:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:03.113 11:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:03.113 11:16:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:03.113 11:16:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:03.113 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:03.113 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:03.113 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:03.113 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:03.113 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:03.113 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:03.113 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:03.113 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:03.113 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:03.113 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:03.113 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:03.113 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:03.113 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:03.113 ' 00:35:08.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:08.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:08.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:08.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:08.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:08.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:08.397 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:08.397 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:08.397 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:08.397 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:08.397 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:08.397 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:08.397 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:08.397 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3542592 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3542592 ']' 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3542592 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3542592 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3542592' 00:35:08.397 killing process with pid 3542592 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 3542592 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 3542592 00:35:08.397 11:16:59 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3542592 ']' 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3542592 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3542592 ']' 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3542592 00:35:08.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3542592) - No such process 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 3542592 is not found' 00:35:08.397 Process with pid 3542592 is not found 00:35:08.397 11:16:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:08.398 11:16:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:08.398 11:16:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:08.398 00:35:08.398 real 0m16.225s 00:35:08.398 user 0m33.631s 00:35:08.398 sys 0m0.693s 00:35:08.398 11:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:08.398 11:16:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:08.398 ************************************ 00:35:08.398 END TEST spdkcli_nvmf_tcp 00:35:08.398 ************************************ 00:35:08.398 11:16:59 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:08.398 11:16:59 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:08.398 11:16:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:35:08.398 11:16:59 -- common/autotest_common.sh@10 -- # set +x 00:35:08.398 ************************************ 00:35:08.398 START TEST nvmf_identify_passthru 00:35:08.398 ************************************ 00:35:08.398 11:16:59 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:08.398 * Looking for test storage... 00:35:08.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:08.398 11:16:59 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:08.398 11:16:59 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:08.398 11:16:59 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:35:08.659 11:16:59 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:08.659 11:16:59 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:08.659 11:16:59 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:08.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.659 --rc genhtml_branch_coverage=1 00:35:08.659 --rc genhtml_function_coverage=1 00:35:08.659 --rc genhtml_legend=1 00:35:08.659 --rc geninfo_all_blocks=1 00:35:08.659 --rc geninfo_unexecuted_blocks=1 00:35:08.659 
00:35:08.659 ' 00:35:08.659 11:16:59 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:08.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.659 --rc genhtml_branch_coverage=1 00:35:08.659 --rc genhtml_function_coverage=1 00:35:08.659 --rc genhtml_legend=1 00:35:08.659 --rc geninfo_all_blocks=1 00:35:08.659 --rc geninfo_unexecuted_blocks=1 00:35:08.659 00:35:08.659 ' 00:35:08.659 11:16:59 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:08.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.659 --rc genhtml_branch_coverage=1 00:35:08.659 --rc genhtml_function_coverage=1 00:35:08.659 --rc genhtml_legend=1 00:35:08.659 --rc geninfo_all_blocks=1 00:35:08.659 --rc geninfo_unexecuted_blocks=1 00:35:08.659 00:35:08.659 ' 00:35:08.659 11:16:59 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:08.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.659 --rc genhtml_branch_coverage=1 00:35:08.659 --rc genhtml_function_coverage=1 00:35:08.659 --rc genhtml_legend=1 00:35:08.659 --rc geninfo_all_blocks=1 00:35:08.659 --rc geninfo_unexecuted_blocks=1 00:35:08.659 00:35:08.659 ' 00:35:08.659 11:16:59 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:08.659 11:16:59 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:08.659 11:16:59 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.659 11:16:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.659 11:16:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.659 11:16:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:08.659 11:16:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:08.659 11:16:59 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:08.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:08.659 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:08.659 11:16:59 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:08.659 11:16:59 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:08.660 11:16:59 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.660 11:16:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.660 11:16:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.660 11:16:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:08.660 11:16:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.660 11:16:59 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:08.660 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:08.660 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:08.660 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:08.660 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:08.660 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:08.660 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.660 11:16:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:08.660 11:16:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:08.660 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:08.660 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:08.660 11:16:59 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:08.660 11:16:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:16.796 
11:17:06 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:16.796 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:16.796 Found 0000:4b:00.1 
(0x8086 - 0x159b) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:16.796 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.796 11:17:06 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:16.796 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.796 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:16.797 
11:17:06 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:16.797 11:17:06 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:16.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:16.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:35:16.797 00:35:16.797 --- 10.0.0.2 ping statistics --- 00:35:16.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.797 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:16.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:16.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:35:16.797 00:35:16.797 --- 10.0.0.1 ping statistics --- 00:35:16.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.797 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:16.797 11:17:07 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:16.797 11:17:07 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:16.797 11:17:07 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:16.797 11:17:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:16.797 11:17:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:16.797 
11:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:35:16.797 11:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:35:16.797 11:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:35:16.797 11:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:35:16.797 11:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:35:16.797 11:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:35:16.797 11:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:16.797 11:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:16.797 11:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:35:16.797 11:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:35:16.797 11:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:35:16.797 11:17:07 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:35:16.797 11:17:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:16.797 11:17:07 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:16.797 11:17:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:16.797 11:17:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:16.797 11:17:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:16.797 11:17:07 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:35:16.797 11:17:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:16.797 11:17:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:16.797 11:17:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:17.059 11:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:17.059 11:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:17.059 11:17:08 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:17.059 11:17:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.059 11:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:17.059 11:17:08 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:17.059 11:17:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.059 11:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3549606 00:35:17.059 11:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:17.059 11:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:17.059 11:17:08 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3549606 00:35:17.059 11:17:08 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 3549606 ']' 00:35:17.059 11:17:08 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
00:35:17.059 11:17:08 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:17.059 11:17:08 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:17.059 11:17:08 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:17.059 11:17:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.059 [2024-11-06 11:17:08.381644] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:35:17.059 [2024-11-06 11:17:08.381704] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:17.059 [2024-11-06 11:17:08.461010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:17.320 [2024-11-06 11:17:08.501049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:17.320 [2024-11-06 11:17:08.501087] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:17.320 [2024-11-06 11:17:08.501095] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:17.320 [2024-11-06 11:17:08.501101] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:17.320 [2024-11-06 11:17:08.501107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:17.320 [2024-11-06 11:17:08.502870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:17.320 [2024-11-06 11:17:08.502988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:17.320 [2024-11-06 11:17:08.503146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:17.320 [2024-11-06 11:17:08.503146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:17.891 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:17.891 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:35:17.891 11:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:17.891 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.891 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.891 INFO: Log level set to 20 00:35:17.891 INFO: Requests: 00:35:17.891 { 00:35:17.891 "jsonrpc": "2.0", 00:35:17.891 "method": "nvmf_set_config", 00:35:17.891 "id": 1, 00:35:17.891 "params": { 00:35:17.891 "admin_cmd_passthru": { 00:35:17.891 "identify_ctrlr": true 00:35:17.891 } 00:35:17.891 } 00:35:17.891 } 00:35:17.891 00:35:17.891 INFO: response: 00:35:17.891 { 00:35:17.891 "jsonrpc": "2.0", 00:35:17.891 "id": 1, 00:35:17.891 "result": true 00:35:17.891 } 00:35:17.891 00:35:17.892 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.892 11:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:17.892 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.892 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.892 INFO: Setting log level to 20 00:35:17.892 INFO: Setting log level to 20 00:35:17.892 INFO: Log level set to 20 00:35:17.892 INFO: Log level set to 20 00:35:17.892 
INFO: Requests: 00:35:17.892 { 00:35:17.892 "jsonrpc": "2.0", 00:35:17.892 "method": "framework_start_init", 00:35:17.892 "id": 1 00:35:17.892 } 00:35:17.892 00:35:17.892 INFO: Requests: 00:35:17.892 { 00:35:17.892 "jsonrpc": "2.0", 00:35:17.892 "method": "framework_start_init", 00:35:17.892 "id": 1 00:35:17.892 } 00:35:17.892 00:35:17.892 [2024-11-06 11:17:09.255569] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:17.892 INFO: response: 00:35:17.892 { 00:35:17.892 "jsonrpc": "2.0", 00:35:17.892 "id": 1, 00:35:17.892 "result": true 00:35:17.892 } 00:35:17.892 00:35:17.892 INFO: response: 00:35:17.892 { 00:35:17.892 "jsonrpc": "2.0", 00:35:17.892 "id": 1, 00:35:17.892 "result": true 00:35:17.892 } 00:35:17.892 00:35:17.892 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.892 11:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:17.892 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.892 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.892 INFO: Setting log level to 40 00:35:17.892 INFO: Setting log level to 40 00:35:17.892 INFO: Setting log level to 40 00:35:17.892 [2024-11-06 11:17:09.268904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:17.892 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.892 11:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:17.892 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:17.892 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.152 11:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:18.152 11:17:09 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.152 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.419 Nvme0n1 00:35:18.419 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.419 11:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:18.419 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.419 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.419 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.419 11:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:18.419 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.419 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.419 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.419 11:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:18.419 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.419 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.419 [2024-11-06 11:17:09.669054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:18.419 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.419 11:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:18.419 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.419 11:17:09 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.419 [ 00:35:18.419 { 00:35:18.419 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:18.419 "subtype": "Discovery", 00:35:18.419 "listen_addresses": [], 00:35:18.419 "allow_any_host": true, 00:35:18.419 "hosts": [] 00:35:18.419 }, 00:35:18.419 { 00:35:18.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:18.419 "subtype": "NVMe", 00:35:18.419 "listen_addresses": [ 00:35:18.419 { 00:35:18.419 "trtype": "TCP", 00:35:18.419 "adrfam": "IPv4", 00:35:18.419 "traddr": "10.0.0.2", 00:35:18.419 "trsvcid": "4420" 00:35:18.419 } 00:35:18.419 ], 00:35:18.419 "allow_any_host": true, 00:35:18.419 "hosts": [], 00:35:18.419 "serial_number": "SPDK00000000000001", 00:35:18.419 "model_number": "SPDK bdev Controller", 00:35:18.419 "max_namespaces": 1, 00:35:18.419 "min_cntlid": 1, 00:35:18.419 "max_cntlid": 65519, 00:35:18.419 "namespaces": [ 00:35:18.419 { 00:35:18.419 "nsid": 1, 00:35:18.419 "bdev_name": "Nvme0n1", 00:35:18.419 "name": "Nvme0n1", 00:35:18.419 "nguid": "36344730526054870025384500000044", 00:35:18.419 "uuid": "36344730-5260-5487-0025-384500000044" 00:35:18.419 } 00:35:18.419 ] 00:35:18.420 } 00:35:18.420 ] 00:35:18.420 11:17:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.420 11:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:18.420 11:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:18.420 11:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:18.681 11:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:35:18.681 11:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:18.681 11:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:18.681 11:17:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:18.942 11:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:18.942 11:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:35:18.942 11:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:18.942 11:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:18.942 11:17:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.942 11:17:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.942 11:17:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.942 11:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:18.942 11:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:18.942 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:18.942 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:18.942 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:18.942 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:18.942 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:18.942 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:18.942 rmmod nvme_tcp 00:35:18.942 rmmod nvme_fabrics 00:35:18.942 rmmod nvme_keyring 00:35:18.942 11:17:10 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:18.942 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:18.942 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:18.942 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 3549606 ']' 00:35:18.942 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3549606 00:35:18.942 11:17:10 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 3549606 ']' 00:35:18.942 11:17:10 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 3549606 00:35:18.942 11:17:10 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:35:18.942 11:17:10 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:18.942 11:17:10 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3549606 00:35:18.942 11:17:10 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:18.942 11:17:10 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:18.942 11:17:10 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3549606' 00:35:18.942 killing process with pid 3549606 00:35:18.942 11:17:10 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 3549606 00:35:18.942 11:17:10 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 3549606 00:35:19.203 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:19.203 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:19.203 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:19.203 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:19.203 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:19.203 11:17:10 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:19.203 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:35:19.203 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:19.203 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:19.203 11:17:10 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.203 11:17:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:19.203 11:17:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.747 11:17:12 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:21.747 00:35:21.747 real 0m12.981s 00:35:21.747 user 0m10.372s 00:35:21.747 sys 0m6.550s 00:35:21.747 11:17:12 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:21.747 11:17:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:21.747 ************************************ 00:35:21.747 END TEST nvmf_identify_passthru 00:35:21.747 ************************************ 00:35:21.747 11:17:12 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:21.747 11:17:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:21.747 11:17:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:21.747 11:17:12 -- common/autotest_common.sh@10 -- # set +x 00:35:21.747 ************************************ 00:35:21.747 START TEST nvmf_dif 00:35:21.747 ************************************ 00:35:21.747 11:17:12 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:21.747 * Looking for test storage... 
00:35:21.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:21.747 11:17:12 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:21.747 11:17:12 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:35:21.747 11:17:12 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:21.747 11:17:12 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:21.747 11:17:12 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:21.747 11:17:12 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:21.747 11:17:12 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:21.747 11:17:12 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:21.747 11:17:12 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:21.747 11:17:12 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:21.747 11:17:12 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:21.747 11:17:12 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:21.747 11:17:12 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:21.747 11:17:12 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:21.747 11:17:12 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:21.747 11:17:12 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:21.747 11:17:12 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:21.747 11:17:12 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:21.747 11:17:12 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:21.748 11:17:12 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:21.748 11:17:12 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:21.748 11:17:12 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:21.748 11:17:12 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:21.748 11:17:12 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:21.748 11:17:12 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:21.748 11:17:12 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:21.748 11:17:12 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:21.748 11:17:12 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:21.748 11:17:12 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:21.748 11:17:12 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:21.748 11:17:12 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:21.748 11:17:12 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:21.748 11:17:12 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:21.748 11:17:12 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:21.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.748 --rc genhtml_branch_coverage=1 00:35:21.748 --rc genhtml_function_coverage=1 00:35:21.748 --rc genhtml_legend=1 00:35:21.748 --rc geninfo_all_blocks=1 00:35:21.748 --rc geninfo_unexecuted_blocks=1 00:35:21.748 00:35:21.748 ' 00:35:21.748 11:17:12 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:21.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.748 --rc genhtml_branch_coverage=1 00:35:21.748 --rc genhtml_function_coverage=1 00:35:21.748 --rc genhtml_legend=1 00:35:21.748 --rc geninfo_all_blocks=1 00:35:21.748 --rc geninfo_unexecuted_blocks=1 00:35:21.748 00:35:21.748 ' 00:35:21.748 11:17:12 nvmf_dif -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:35:21.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.748 --rc genhtml_branch_coverage=1 00:35:21.748 --rc genhtml_function_coverage=1 00:35:21.748 --rc genhtml_legend=1 00:35:21.748 --rc geninfo_all_blocks=1 00:35:21.748 --rc geninfo_unexecuted_blocks=1 00:35:21.748 00:35:21.748 ' 00:35:21.748 11:17:12 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:21.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.748 --rc genhtml_branch_coverage=1 00:35:21.748 --rc genhtml_function_coverage=1 00:35:21.748 --rc genhtml_legend=1 00:35:21.748 --rc geninfo_all_blocks=1 00:35:21.748 --rc geninfo_unexecuted_blocks=1 00:35:21.748 00:35:21.748 ' 00:35:21.748 11:17:12 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:21.748 11:17:12 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:21.748 11:17:12 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:21.748 11:17:12 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:21.748 11:17:12 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:21.748 11:17:12 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:21.748 11:17:12 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.748 11:17:12 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.748 11:17:12 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.748 11:17:12 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:21.748 11:17:12 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:21.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:21.748 11:17:12 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:21.748 11:17:12 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:35:21.748 11:17:12 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:21.748 11:17:12 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:21.748 11:17:12 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.748 11:17:12 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:21.748 11:17:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:21.748 11:17:12 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:35:21.748 11:17:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:28.330 11:17:19 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:28.330 11:17:19 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:28.330 11:17:19 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:28.330 11:17:19 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:28.330 11:17:19 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:28.330 11:17:19 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:28.330 11:17:19 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:28.330 11:17:19 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:28.330 11:17:19 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:28.330 11:17:19 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:28.331 11:17:19 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:28.331 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:28.331 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:28.331 11:17:19 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:28.331 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:28.331 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:28.331 
11:17:19 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:28.331 11:17:19 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:28.593 11:17:19 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:28.593 11:17:19 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:28.593 11:17:19 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:28.593 11:17:19 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:28.593 11:17:19 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:28.593 11:17:19 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:28.593 11:17:19 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:28.593 11:17:19 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:28.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:28.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:35:28.593 00:35:28.593 --- 10.0.0.2 ping statistics --- 00:35:28.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:28.593 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:35:28.593 11:17:19 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:28.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:28.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:35:28.593 00:35:28.593 --- 10.0.0.1 ping statistics --- 00:35:28.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:28.593 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:35:28.593 11:17:19 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:28.593 11:17:19 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:35:28.593 11:17:19 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:28.593 11:17:19 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:31.898 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:31.898 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:31.898 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:31.898 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:31.898 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:31.898 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:31.898 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:31.898 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:31.898 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:31.898 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:31.898 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:31.898 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:31.898 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:35:31.898 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:31.898 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:31.898 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:31.898 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:32.469 11:17:23 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:32.469 11:17:23 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:32.469 11:17:23 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:32.469 11:17:23 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:32.469 11:17:23 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:32.469 11:17:23 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:32.469 11:17:23 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:32.469 11:17:23 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:32.469 11:17:23 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:32.469 11:17:23 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:32.469 11:17:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:32.469 11:17:23 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3555529 00:35:32.469 11:17:23 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3555529 00:35:32.469 11:17:23 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:32.469 11:17:23 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 3555529 ']' 00:35:32.469 11:17:23 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.469 11:17:23 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:32.469 11:17:23 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:32.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:32.469 11:17:23 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:32.469 11:17:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:32.469 [2024-11-06 11:17:23.700231] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:35:32.469 [2024-11-06 11:17:23.700284] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:32.469 [2024-11-06 11:17:23.777470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.469 [2024-11-06 11:17:23.812941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:32.469 [2024-11-06 11:17:23.812972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:32.469 [2024-11-06 11:17:23.812980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:32.469 [2024-11-06 11:17:23.812986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:32.470 [2024-11-06 11:17:23.812992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:32.470 [2024-11-06 11:17:23.813570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.412 11:17:24 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:33.412 11:17:24 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:35:33.412 11:17:24 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:33.412 11:17:24 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:33.412 11:17:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:33.412 11:17:24 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:33.412 11:17:24 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:33.412 11:17:24 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:33.412 11:17:24 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.412 11:17:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:33.412 [2024-11-06 11:17:24.525462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:33.412 11:17:24 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.412 11:17:24 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:33.412 11:17:24 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:33.412 11:17:24 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:33.412 11:17:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:33.412 ************************************ 00:35:33.412 START TEST fio_dif_1_default 00:35:33.412 ************************************ 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:33.412 bdev_null0 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.412 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:33.413 [2024-11-06 11:17:24.613850] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:33.413 { 00:35:33.413 "params": { 00:35:33.413 "name": "Nvme$subsystem", 00:35:33.413 "trtype": "$TEST_TRANSPORT", 00:35:33.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:33.413 "adrfam": "ipv4", 00:35:33.413 "trsvcid": "$NVMF_PORT", 00:35:33.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:33.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:33.413 "hdgst": ${hdgst:-false}, 00:35:33.413 "ddgst": ${ddgst:-false} 00:35:33.413 }, 00:35:33.413 "method": "bdev_nvme_attach_controller" 00:35:33.413 } 00:35:33.413 EOF 00:35:33.413 )") 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 
00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
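The `gen_nvmf_target_json` heredoc expansion traced above (one `bdev_nvme_attach_controller` entry per subsystem, joined with `IFS=,`) can be reproduced standalone. A minimal sketch, with the transport defaults hard-coded to the values used in this run (tcp, 10.0.0.2, 4420) instead of the harness environment variables:

```shell
# Sketch of the per-subsystem JSON generation seen in nvmf/common.sh above.
# The transport values are illustrative defaults, not read from the live
# test environment; hdgst/ddgst default to false as in the trace.
gen_nvmf_target_json() {
    local sub
    local config=()
    local trtype=tcp traddr=10.0.0.2 trsvcid=4420
    for sub in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "$trtype",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # join the per-subsystem objects with commas, as the harness does
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 0
```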
00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:33.413 "params": { 00:35:33.413 "name": "Nvme0", 00:35:33.413 "trtype": "tcp", 00:35:33.413 "traddr": "10.0.0.2", 00:35:33.413 "adrfam": "ipv4", 00:35:33.413 "trsvcid": "4420", 00:35:33.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:33.413 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:33.413 "hdgst": false, 00:35:33.413 "ddgst": false 00:35:33.413 }, 00:35:33.413 "method": "bdev_nvme_attach_controller" 00:35:33.413 }' 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:33.413 11:17:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.673 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:33.673 fio-3.35 
00:35:33.673 Starting 1 thread 00:35:45.907 00:35:45.907 filename0: (groupid=0, jobs=1): err= 0: pid=3556059: Wed Nov 6 11:17:35 2024 00:35:45.907 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:35:45.907 slat (nsec): min=5381, max=31322, avg=6133.74, stdev=1541.41 00:35:45.907 clat (usec): min=40872, max=42368, avg=40997.67, stdev=128.00 00:35:45.907 lat (usec): min=40880, max=42400, avg=41003.81, stdev=128.48 00:35:45.907 clat percentiles (usec): 00:35:45.907 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:45.907 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:45.907 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:45.907 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:35:45.907 | 99.99th=[42206] 00:35:45.907 bw ( KiB/s): min= 384, max= 416, per=99.46%, avg=388.80, stdev=11.72, samples=20 00:35:45.907 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:45.907 lat (msec) : 50=100.00% 00:35:45.907 cpu : usr=93.84%, sys=5.95%, ctx=9, majf=0, minf=226 00:35:45.907 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:45.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.907 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.907 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:45.907 00:35:45.907 Run status group 0 (all jobs): 00:35:45.907 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10008-10008msec 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.907 00:35:45.907 real 0m11.254s 00:35:45.907 user 0m27.984s 00:35:45.907 sys 0m0.917s 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:45.907 ************************************ 00:35:45.907 END TEST fio_dif_1_default 00:35:45.907 ************************************ 00:35:45.907 11:17:35 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:45.907 11:17:35 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:45.907 11:17:35 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:45.907 11:17:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:45.907 ************************************ 00:35:45.907 START TEST fio_dif_1_multi_subsystems 00:35:45.907 ************************************ 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.907 bdev_null0 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.907 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.908 [2024-11-06 11:17:35.947466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.908 bdev_null1 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.908 11:17:35 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.908 11:17:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:45.908 { 00:35:45.908 "params": { 00:35:45.908 "name": "Nvme$subsystem", 00:35:45.908 "trtype": "$TEST_TRANSPORT", 00:35:45.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:45.908 "adrfam": "ipv4", 00:35:45.908 "trsvcid": "$NVMF_PORT", 00:35:45.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:45.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:45.908 "hdgst": ${hdgst:-false}, 00:35:45.908 "ddgst": ${ddgst:-false} 00:35:45.908 }, 00:35:45.908 "method": "bdev_nvme_attach_controller" 00:35:45.908 } 00:35:45.908 EOF 00:35:45.908 )") 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:45.908 11:17:36 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:45.908 { 00:35:45.908 "params": { 00:35:45.908 "name": "Nvme$subsystem", 00:35:45.908 "trtype": "$TEST_TRANSPORT", 00:35:45.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:45.908 "adrfam": "ipv4", 00:35:45.908 "trsvcid": "$NVMF_PORT", 00:35:45.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:45.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:45.908 "hdgst": ${hdgst:-false}, 00:35:45.908 "ddgst": ${ddgst:-false} 00:35:45.908 }, 00:35:45.908 "method": "bdev_nvme_attach_controller" 00:35:45.908 } 00:35:45.908 EOF 00:35:45.908 )") 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
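The `fio_plugin` wrapper above ultimately launches `/usr/src/fio/fio` with the resolved SPDK JSON config and the generated fio job file delivered over `/dev/fd` descriptors (plus `LD_PRELOAD` of the spdk_bdev engine, and an ASAN library when `ldd` finds one). A dry-run sketch of that invocation pattern; `FIO` defaults to `echo` so nothing is executed, and the config/job contents are placeholders:

```shell
# Sketch of the fio invocation pattern used above: JSON config and job file
# are fed via process substitution, mirroring /dev/fd/62 and /dev/fd/61.
# FIO=echo makes this a dry run; set FIO=/usr/src/fio/fio to really run it.
FIO=${FIO:-echo}

run_fio() {
    # $1: SPDK JSON configuration, $2: fio job file content
    $FIO --ioengine=spdk_bdev --spdk_json_conf <(printf '%s\n' "$1") \
        <(printf '%s\n' "$2")
}

# placeholder config/job; the real resolved JSON is printed in the trace
run_fio '{"subsystems": []}' \
    $'[filename0]\nfilename=Nvme0n1\nrw=randread\nbs=4k\niodepth=4'
```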
00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:45.908 "params": { 00:35:45.908 "name": "Nvme0", 00:35:45.908 "trtype": "tcp", 00:35:45.908 "traddr": "10.0.0.2", 00:35:45.908 "adrfam": "ipv4", 00:35:45.908 "trsvcid": "4420", 00:35:45.908 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:45.908 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:45.908 "hdgst": false, 00:35:45.908 "ddgst": false 00:35:45.908 }, 00:35:45.908 "method": "bdev_nvme_attach_controller" 00:35:45.908 },{ 00:35:45.908 "params": { 00:35:45.908 "name": "Nvme1", 00:35:45.908 "trtype": "tcp", 00:35:45.908 "traddr": "10.0.0.2", 00:35:45.908 "adrfam": "ipv4", 00:35:45.908 "trsvcid": "4420", 00:35:45.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:45.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:45.908 "hdgst": false, 00:35:45.908 "ddgst": false 00:35:45.908 }, 00:35:45.908 "method": "bdev_nvme_attach_controller" 00:35:45.908 }' 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:45.908 11:17:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:45.908 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:45.908 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:45.908 fio-3.35 00:35:45.908 Starting 2 threads 00:35:55.913 00:35:55.913 filename0: (groupid=0, jobs=1): err= 0: pid=3558424: Wed Nov 6 11:17:47 2024 00:35:55.913 read: IOPS=96, BW=386KiB/s (395kB/s)(3856KiB/10001msec) 00:35:55.913 slat (nsec): min=5395, max=32882, avg=6246.62, stdev=1664.32 00:35:55.913 clat (usec): min=40885, max=43639, avg=41479.66, stdev=609.87 00:35:55.913 lat (usec): min=40893, max=43672, avg=41485.91, stdev=609.97 00:35:55.913 clat percentiles (usec): 00:35:55.913 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:55.913 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:35:55.913 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:35:55.913 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:35:55.913 | 99.99th=[43779] 00:35:55.913 bw ( KiB/s): min= 384, max= 384, per=33.74%, avg=384.00, stdev= 0.00, samples=19 00:35:55.913 iops : min= 96, max= 96, avg=96.00, stdev= 0.00, samples=19 00:35:55.913 lat (msec) : 50=100.00% 00:35:55.913 cpu : usr=95.23%, sys=4.56%, ctx=9, majf=0, minf=110 00:35:55.913 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.913 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.913 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:55.913 filename1: (groupid=0, jobs=1): err= 0: pid=3558425: Wed Nov 6 11:17:47 2024 00:35:55.913 read: IOPS=188, BW=754KiB/s (772kB/s)(7568KiB/10038msec) 00:35:55.913 slat (nsec): min=5398, max=33501, avg=6382.94, stdev=1747.89 00:35:55.913 clat (usec): min=706, max=43085, avg=21203.78, stdev=20295.38 00:35:55.913 lat (usec): min=713, max=43091, avg=21210.16, stdev=20295.28 00:35:55.913 clat percentiles (usec): 00:35:55.913 | 1.00th=[ 816], 5.00th=[ 898], 10.00th=[ 922], 20.00th=[ 938], 00:35:55.913 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 2114], 60.00th=[41157], 00:35:55.913 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:35:55.913 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:35:55.913 | 99.99th=[43254] 00:35:55.913 bw ( KiB/s): min= 704, max= 768, per=66.34%, avg=755.20, stdev=26.27, samples=20 00:35:55.913 iops : min= 176, max= 192, avg=188.80, stdev= 6.57, samples=20 00:35:55.913 lat (usec) : 750=0.42%, 1000=44.40% 00:35:55.913 lat (msec) : 2=5.07%, 4=0.21%, 50=49.89% 00:35:55.913 cpu : usr=95.18%, sys=4.60%, ctx=5, majf=0, minf=178 00:35:55.913 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.913 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.913 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:55.913 00:35:55.913 Run status group 0 (all jobs): 00:35:55.913 READ: bw=1138KiB/s (1165kB/s), 386KiB/s-754KiB/s (395kB/s-772kB/s), io=11.2MiB (11.7MB), run=10001-10038msec 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:56.174 
11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.174 00:35:56.174 real 0m11.489s 00:35:56.174 user 0m32.755s 00:35:56.174 sys 0m1.271s 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:56.174 11:17:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:56.174 ************************************ 00:35:56.174 END TEST fio_dif_1_multi_subsystems 00:35:56.174 ************************************ 00:35:56.174 11:17:47 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:56.174 11:17:47 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:56.175 11:17:47 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:56.175 11:17:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:56.175 ************************************ 00:35:56.175 START TEST fio_dif_rand_params 00:35:56.175 ************************************ 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.175 bdev_null0 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.175 11:17:47 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.175 [2024-11-06 11:17:47.517592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:56.175 { 00:35:56.175 "params": { 00:35:56.175 "name": "Nvme$subsystem", 00:35:56.175 "trtype": "$TEST_TRANSPORT", 00:35:56.175 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:35:56.175 "adrfam": "ipv4", 00:35:56.175 "trsvcid": "$NVMF_PORT", 00:35:56.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:56.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:56.175 "hdgst": ${hdgst:-false}, 00:35:56.175 "ddgst": ${ddgst:-false} 00:35:56.175 }, 00:35:56.175 "method": "bdev_nvme_attach_controller" 00:35:56.175 } 00:35:56.175 EOF 00:35:56.175 )") 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:56.175 11:17:47 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:56.175 "params": { 00:35:56.175 "name": "Nvme0", 00:35:56.175 "trtype": "tcp", 00:35:56.175 "traddr": "10.0.0.2", 00:35:56.175 "adrfam": "ipv4", 00:35:56.175 "trsvcid": "4420", 00:35:56.175 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:56.175 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:56.175 "hdgst": false, 00:35:56.175 "ddgst": false 00:35:56.175 }, 00:35:56.175 "method": "bdev_nvme_attach_controller" 00:35:56.175 }' 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:35:56.175 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:56.452 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:56.452 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:56.452 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:56.452 11:17:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:56.717 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:56.717 ... 00:35:56.717 fio-3.35 00:35:56.717 Starting 3 threads 00:36:03.295 00:36:03.295 filename0: (groupid=0, jobs=1): err= 0: pid=3560768: Wed Nov 6 11:17:53 2024 00:36:03.295 read: IOPS=178, BW=22.3MiB/s (23.4MB/s)(112MiB/5028msec) 00:36:03.295 slat (nsec): min=2819, max=21602, avg=6001.44, stdev=698.12 00:36:03.295 clat (usec): min=5688, max=91840, avg=16823.16, stdev=17193.25 00:36:03.295 lat (usec): min=5694, max=91846, avg=16829.16, stdev=17193.24 00:36:03.295 clat percentiles (usec): 00:36:03.295 | 1.00th=[ 5997], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 7898], 00:36:03.295 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9896], 60.00th=[10552], 00:36:03.295 | 70.00th=[11469], 80.00th=[13042], 90.00th=[49546], 95.00th=[51119], 00:36:03.295 | 99.00th=[90702], 99.50th=[90702], 99.90th=[91751], 99.95th=[91751], 00:36:03.295 | 99.99th=[91751] 00:36:03.295 bw ( KiB/s): min=15360, max=33280, per=31.50%, avg=22864.00, stdev=5824.92, samples=10 00:36:03.295 iops : min= 120, max= 260, avg=178.60, stdev=45.54, samples=10 00:36:03.295 lat (msec) : 10=52.12%, 20=30.80%, 50=8.59%, 100=8.48% 00:36:03.295 cpu : usr=95.66%, sys=4.12%, ctx=11, majf=0, minf=35 00:36:03.295 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:03.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.295 issued rwts: total=896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.295 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:03.295 filename0: (groupid=0, jobs=1): err= 0: pid=3560769: Wed Nov 6 11:17:53 2024 00:36:03.295 read: IOPS=241, BW=30.2MiB/s (31.6MB/s)(152MiB/5044msec) 00:36:03.295 slat (nsec): min=5444, max=31021, avg=7700.83, stdev=1728.72 00:36:03.295 
clat (usec): min=5220, max=90988, avg=12389.85, stdev=10799.15 00:36:03.295 lat (usec): min=5229, max=90996, avg=12397.55, stdev=10799.45 00:36:03.295 clat percentiles (usec): 00:36:03.295 | 1.00th=[ 5800], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7570], 00:36:03.295 | 30.00th=[ 7963], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[10028], 00:36:03.295 | 70.00th=[10945], 80.00th=[12518], 90.00th=[15139], 95.00th=[47973], 00:36:03.295 | 99.00th=[51119], 99.50th=[52167], 99.90th=[87557], 99.95th=[90702], 00:36:03.295 | 99.99th=[90702] 00:36:03.295 bw ( KiB/s): min=12825, max=41472, per=42.82%, avg=31080.90, stdev=8869.11, samples=10 00:36:03.295 iops : min= 100, max= 324, avg=242.80, stdev=69.33, samples=10 00:36:03.295 lat (msec) : 10=58.92%, 20=33.94%, 50=5.26%, 100=1.89% 00:36:03.295 cpu : usr=95.46%, sys=4.30%, ctx=9, majf=0, minf=151 00:36:03.295 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:03.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.295 issued rwts: total=1217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.295 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:03.295 filename0: (groupid=0, jobs=1): err= 0: pid=3560770: Wed Nov 6 11:17:53 2024 00:36:03.295 read: IOPS=148, BW=18.5MiB/s (19.4MB/s)(93.5MiB/5045msec) 00:36:03.295 slat (nsec): min=5497, max=31243, avg=7986.57, stdev=1676.00 00:36:03.295 clat (msec): min=5, max=131, avg=20.17, stdev=20.01 00:36:03.295 lat (msec): min=5, max=131, avg=20.18, stdev=20.01 00:36:03.295 clat percentiles (msec): 00:36:03.295 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:36:03.295 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 12], 00:36:03.295 | 70.00th=[ 14], 80.00th=[ 48], 90.00th=[ 51], 95.00th=[ 53], 00:36:03.295 | 99.00th=[ 92], 99.50th=[ 93], 99.90th=[ 132], 99.95th=[ 132], 00:36:03.295 | 99.99th=[ 132] 00:36:03.295 bw ( 
KiB/s): min=12544, max=25804, per=26.30%, avg=19092.40, stdev=4152.25, samples=10 00:36:03.295 iops : min= 98, max= 201, avg=149.10, stdev=32.33, samples=10 00:36:03.295 lat (msec) : 10=42.25%, 20=34.63%, 50=11.63%, 100=11.36%, 250=0.13% 00:36:03.295 cpu : usr=95.84%, sys=3.89%, ctx=52, majf=0, minf=71 00:36:03.295 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:03.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.295 issued rwts: total=748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.295 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:03.295 00:36:03.295 Run status group 0 (all jobs): 00:36:03.295 READ: bw=70.9MiB/s (74.3MB/s), 18.5MiB/s-30.2MiB/s (19.4MB/s-31.6MB/s), io=358MiB (375MB), run=5028-5045msec 00:36:03.295 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:03.295 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:03.295 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:03.295 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:03.295 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:03.295 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:03.295 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.295 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.295 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.295 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:03.295 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 
00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.296 bdev_null0 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.296 [2024-11-06 11:17:53.711002] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.296 bdev_null1 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.296 bdev_null2 00:36:03.296 
11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- 
# config=() 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:03.296 { 00:36:03.296 "params": { 00:36:03.296 "name": "Nvme$subsystem", 00:36:03.296 "trtype": "$TEST_TRANSPORT", 00:36:03.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.296 "adrfam": "ipv4", 00:36:03.296 "trsvcid": "$NVMF_PORT", 00:36:03.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.296 "hdgst": ${hdgst:-false}, 00:36:03.296 "ddgst": ${ddgst:-false} 00:36:03.296 }, 00:36:03.296 "method": "bdev_nvme_attach_controller" 00:36:03.296 } 00:36:03.296 EOF 00:36:03.296 )") 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 
00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:03.296 { 00:36:03.296 "params": { 00:36:03.296 "name": "Nvme$subsystem", 00:36:03.296 "trtype": "$TEST_TRANSPORT", 00:36:03.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.296 "adrfam": "ipv4", 00:36:03.296 "trsvcid": "$NVMF_PORT", 00:36:03.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.296 "hdgst": ${hdgst:-false}, 00:36:03.296 "ddgst": ${ddgst:-false} 00:36:03.296 }, 00:36:03.296 "method": "bdev_nvme_attach_controller" 00:36:03.296 } 00:36:03.296 EOF 00:36:03.296 )") 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:03.296 11:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:03.297 
11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:03.297 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:03.297 11:17:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:03.297 11:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:03.297 11:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:03.297 { 00:36:03.297 "params": { 00:36:03.297 "name": "Nvme$subsystem", 00:36:03.297 "trtype": "$TEST_TRANSPORT", 00:36:03.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.297 "adrfam": "ipv4", 00:36:03.297 "trsvcid": "$NVMF_PORT", 00:36:03.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.297 "hdgst": ${hdgst:-false}, 00:36:03.297 "ddgst": ${ddgst:-false} 00:36:03.297 }, 00:36:03.297 "method": "bdev_nvme_attach_controller" 00:36:03.297 } 00:36:03.297 EOF 00:36:03.297 )") 00:36:03.297 11:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:03.297 11:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:36:03.297 11:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:03.297 11:17:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:03.297 "params": { 00:36:03.297 "name": "Nvme0", 00:36:03.297 "trtype": "tcp", 00:36:03.297 "traddr": "10.0.0.2", 00:36:03.297 "adrfam": "ipv4", 00:36:03.297 "trsvcid": "4420", 00:36:03.297 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:03.297 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:03.297 "hdgst": false, 00:36:03.297 "ddgst": false 00:36:03.297 }, 00:36:03.297 "method": "bdev_nvme_attach_controller" 00:36:03.297 },{ 00:36:03.297 "params": { 00:36:03.297 "name": "Nvme1", 00:36:03.297 "trtype": "tcp", 00:36:03.297 "traddr": "10.0.0.2", 00:36:03.297 "adrfam": "ipv4", 00:36:03.297 "trsvcid": "4420", 00:36:03.297 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:03.297 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:03.297 "hdgst": false, 00:36:03.297 "ddgst": false 00:36:03.297 }, 00:36:03.297 "method": "bdev_nvme_attach_controller" 00:36:03.297 },{ 00:36:03.297 "params": { 00:36:03.297 "name": "Nvme2", 00:36:03.297 "trtype": "tcp", 00:36:03.297 "traddr": "10.0.0.2", 00:36:03.297 "adrfam": "ipv4", 00:36:03.297 "trsvcid": "4420", 00:36:03.297 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:03.297 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:03.297 "hdgst": false, 00:36:03.297 "ddgst": false 00:36:03.297 }, 00:36:03.297 "method": "bdev_nvme_attach_controller" 00:36:03.297 }' 00:36:03.297 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:03.297 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:03.297 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.297 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.297 11:17:53 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:03.297 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:03.297 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:03.297 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:03.297 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:03.297 11:17:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.297 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:03.297 ... 00:36:03.297 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:03.297 ... 00:36:03.297 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:03.297 ... 
00:36:03.297 fio-3.35 00:36:03.297 Starting 24 threads 00:36:15.531 00:36:15.531 filename0: (groupid=0, jobs=1): err= 0: pid=3562271: Wed Nov 6 11:18:05 2024 00:36:15.531 read: IOPS=497, BW=1990KiB/s (2037kB/s)(19.4MiB/10008msec) 00:36:15.531 slat (usec): min=5, max=121, avg=25.29, stdev=19.89 00:36:15.531 clat (usec): min=8044, max=57509, avg=31935.06, stdev=3217.45 00:36:15.531 lat (usec): min=8062, max=57518, avg=31960.35, stdev=3218.87 00:36:15.531 clat percentiles (usec): 00:36:15.531 | 1.00th=[16581], 5.00th=[27657], 10.00th=[31589], 20.00th=[31851], 00:36:15.531 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:36:15.531 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:36:15.531 | 99.00th=[35390], 99.50th=[40633], 99.90th=[56886], 99.95th=[57410], 00:36:15.531 | 99.99th=[57410] 00:36:15.531 bw ( KiB/s): min= 1920, max= 2400, per=4.20%, avg=1988.21, stdev=119.00, samples=19 00:36:15.531 iops : min= 480, max= 600, avg=497.05, stdev=29.75, samples=19 00:36:15.531 lat (msec) : 10=0.26%, 20=1.47%, 50=97.95%, 100=0.32% 00:36:15.531 cpu : usr=98.75%, sys=0.84%, ctx=71, majf=0, minf=33 00:36:15.531 IO depths : 1=5.7%, 2=11.5%, 4=23.5%, 8=52.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:36:15.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.531 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.531 issued rwts: total=4978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.531 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.531 filename0: (groupid=0, jobs=1): err= 0: pid=3562272: Wed Nov 6 11:18:05 2024 00:36:15.531 read: IOPS=491, BW=1967KiB/s (2014kB/s)(19.2MiB/10001msec) 00:36:15.531 slat (nsec): min=5562, max=90066, avg=23701.76, stdev=15578.23 00:36:15.531 clat (usec): min=16056, max=57857, avg=32331.83, stdev=2988.51 00:36:15.531 lat (usec): min=16062, max=57873, avg=32355.54, stdev=2989.40 00:36:15.531 clat percentiles (usec): 00:36:15.531 | 
1.00th=[21103], 5.00th=[30802], 10.00th=[31851], 20.00th=[32113], 00:36:15.531 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:36:15.531 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:36:15.531 | 99.00th=[43779], 99.50th=[46400], 99.90th=[57934], 99.95th=[57934], 00:36:15.531 | 99.99th=[57934] 00:36:15.531 bw ( KiB/s): min= 1792, max= 2080, per=4.15%, avg=1962.95, stdev=74.68, samples=19 00:36:15.531 iops : min= 448, max= 520, avg=490.74, stdev=18.67, samples=19 00:36:15.531 lat (msec) : 20=0.73%, 50=98.94%, 100=0.33% 00:36:15.531 cpu : usr=99.14%, sys=0.52%, ctx=13, majf=0, minf=28 00:36:15.531 IO depths : 1=5.2%, 2=10.6%, 4=23.0%, 8=53.8%, 16=7.4%, 32=0.0%, >=64=0.0% 00:36:15.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.531 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.531 issued rwts: total=4918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.531 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.531 filename0: (groupid=0, jobs=1): err= 0: pid=3562273: Wed Nov 6 11:18:05 2024 00:36:15.531 read: IOPS=493, BW=1973KiB/s (2020kB/s)(19.3MiB/10007msec) 00:36:15.531 slat (usec): min=5, max=128, avg=21.00, stdev=17.19 00:36:15.531 clat (usec): min=8982, max=57777, avg=32260.80, stdev=3525.53 00:36:15.531 lat (usec): min=8988, max=57795, avg=32281.80, stdev=3526.47 00:36:15.531 clat percentiles (usec): 00:36:15.531 | 1.00th=[19006], 5.00th=[26608], 10.00th=[31851], 20.00th=[32113], 00:36:15.531 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:36:15.531 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[34341], 00:36:15.531 | 99.00th=[45876], 99.50th=[51119], 99.90th=[57934], 99.95th=[57934], 00:36:15.531 | 99.99th=[57934] 00:36:15.531 bw ( KiB/s): min= 1795, max= 2112, per=4.15%, avg=1964.79, stdev=75.72, samples=19 00:36:15.531 iops : min= 448, max= 528, avg=491.16, stdev=19.02, samples=19 
00:36:15.531 lat (msec) : 10=0.12%, 20=1.24%, 50=97.95%, 100=0.69% 00:36:15.531 cpu : usr=99.09%, sys=0.59%, ctx=14, majf=0, minf=14 00:36:15.531 IO depths : 1=3.3%, 2=6.8%, 4=14.6%, 8=64.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:36:15.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.531 complete : 0=0.0%, 4=91.8%, 8=4.5%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.531 issued rwts: total=4936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.531 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.531 filename0: (groupid=0, jobs=1): err= 0: pid=3562274: Wed Nov 6 11:18:05 2024 00:36:15.531 read: IOPS=509, BW=2037KiB/s (2086kB/s)(19.9MiB/10016msec) 00:36:15.531 slat (nsec): min=5444, max=90405, avg=11740.12, stdev=9457.64 00:36:15.531 clat (usec): min=2019, max=41004, avg=31323.51, stdev=5286.72 00:36:15.531 lat (usec): min=2032, max=41014, avg=31335.25, stdev=5286.34 00:36:15.531 clat percentiles (usec): 00:36:15.531 | 1.00th=[ 2212], 5.00th=[22938], 10.00th=[31851], 20.00th=[32113], 00:36:15.531 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:15.531 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:36:15.531 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36439], 99.95th=[41157], 00:36:15.531 | 99.99th=[41157] 00:36:15.531 bw ( KiB/s): min= 1920, max= 3296, per=4.30%, avg=2033.60, stdev=303.49, samples=20 00:36:15.531 iops : min= 480, max= 824, avg=508.40, stdev=75.87, samples=20 00:36:15.531 lat (msec) : 4=1.88%, 10=0.78%, 20=1.33%, 50=96.00% 00:36:15.531 cpu : usr=98.57%, sys=1.12%, ctx=55, majf=0, minf=26 00:36:15.531 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:15.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.531 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.531 issued rwts: total=5100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.531 latency : target=0, 
window=0, percentile=100.00%, depth=16
00:36:15.531 filename0: (groupid=0, jobs=1): err= 0: pid=3562275: Wed Nov 6 11:18:05 2024
00:36:15.531 read: IOPS=496, BW=1985KiB/s (2033kB/s)(19.5MiB/10048msec)
00:36:15.531 slat (nsec): min=4700, max=98117, avg=21188.24, stdev=17191.66
00:36:15.531 clat (usec): min=13148, max=61088, avg=32078.85, stdev=4690.45
00:36:15.531 lat (usec): min=13187, max=61098, avg=32100.04, stdev=4691.75
00:36:15.531 clat percentiles (usec):
00:36:15.531 | 1.00th=[20579], 5.00th=[22676], 10.00th=[26608], 20.00th=[31851],
00:36:15.531 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375],
00:36:15.531 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[39060],
00:36:15.531 | 99.00th=[51119], 99.50th=[53216], 99.90th=[61080], 99.95th=[61080],
00:36:15.531 | 99.99th=[61080]
00:36:15.531 bw ( KiB/s): min= 1808, max= 2160, per=4.19%, avg=1983.20, stdev=92.78, samples=20
00:36:15.531 iops : min= 452, max= 540, avg=495.80, stdev=23.20, samples=20
00:36:15.531 lat (msec) : 20=0.72%, 50=98.07%, 100=1.20%
00:36:15.531 cpu : usr=98.73%, sys=0.84%, ctx=42, majf=0, minf=21
00:36:15.531 IO depths : 1=2.0%, 2=4.1%, 4=10.8%, 8=70.5%, 16=12.6%, 32=0.0%, >=64=0.0%
00:36:15.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.531 complete : 0=0.0%, 4=90.8%, 8=5.5%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.531 issued rwts: total=4986,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.531 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.531 filename0: (groupid=0, jobs=1): err= 0: pid=3562276: Wed Nov 6 11:18:05 2024
00:36:15.531 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10011msec)
00:36:15.531 slat (nsec): min=5440, max=91298, avg=21311.51, stdev=15183.03
00:36:15.531 clat (usec): min=16119, max=37062, avg=32430.21, stdev=1136.14
00:36:15.531 lat (usec): min=16128, max=37070, avg=32451.52, stdev=1135.12
00:36:15.531 clat percentiles (usec):
00:36:15.531 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113],
00:36:15.531 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375],
00:36:15.531 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:15.531 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963],
00:36:15.531 | 99.99th=[36963]
00:36:15.531 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1960.42, stdev=74.55, samples=19
00:36:15.531 iops : min= 448, max= 512, avg=490.11, stdev=18.64, samples=19
00:36:15.531 lat (msec) : 20=0.08%, 50=99.92%
00:36:15.531 cpu : usr=98.97%, sys=0.69%, ctx=15, majf=0, minf=27
00:36:15.531 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:15.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.531 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.531 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.531 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.531 filename0: (groupid=0, jobs=1): err= 0: pid=3562277: Wed Nov 6 11:18:05 2024
00:36:15.531 read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.3MiB/10005msec)
00:36:15.531 slat (usec): min=5, max=101, avg=27.64, stdev=18.63
00:36:15.531 clat (usec): min=14107, max=57160, avg=32096.09, stdev=2951.16
00:36:15.531 lat (usec): min=14116, max=57179, avg=32123.73, stdev=2953.41
00:36:15.531 clat percentiles (usec):
00:36:15.531 | 1.00th=[21627], 5.00th=[30802], 10.00th=[31851], 20.00th=[31851],
00:36:15.531 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375],
00:36:15.531 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424],
00:36:15.531 | 99.00th=[38011], 99.50th=[50070], 99.90th=[56886], 99.95th=[57410],
00:36:15.531 | 99.99th=[57410]
00:36:15.532 bw ( KiB/s): min= 1795, max= 2144, per=4.16%, avg=1968.16, stdev=86.48, samples=19
00:36:15.532 iops : min= 448, max= 536, avg=492.00, stdev=21.71, samples=19
00:36:15.532 lat (msec) : 20=0.40%, 50=99.11%, 100=0.49%
00:36:15.532 cpu : usr=98.52%, sys=0.89%, ctx=101, majf=0, minf=25
00:36:15.532 IO depths : 1=5.7%, 2=11.5%, 4=23.7%, 8=52.2%, 16=6.9%, 32=0.0%, >=64=0.0%
00:36:15.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.532 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.532 issued rwts: total=4946,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.532 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.532 filename0: (groupid=0, jobs=1): err= 0: pid=3562278: Wed Nov 6 11:18:05 2024
00:36:15.532 read: IOPS=493, BW=1973KiB/s (2020kB/s)(19.3MiB/10024msec)
00:36:15.532 slat (nsec): min=5467, max=85444, avg=14615.01, stdev=10943.52
00:36:15.532 clat (usec): min=10387, max=43605, avg=32318.04, stdev=2471.98
00:36:15.532 lat (usec): min=10402, max=43612, avg=32332.65, stdev=2471.25
00:36:15.532 clat percentiles (usec):
00:36:15.532 | 1.00th=[21103], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113],
00:36:15.532 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:36:15.532 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:15.532 | 99.00th=[35390], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779],
00:36:15.532 | 99.99th=[43779]
00:36:15.532 bw ( KiB/s): min= 1920, max= 2176, per=4.17%, avg=1971.20, stdev=76.58, samples=20
00:36:15.532 iops : min= 480, max= 544, avg=492.80, stdev=19.14, samples=20
00:36:15.532 lat (msec) : 20=0.93%, 50=99.07%
00:36:15.532 cpu : usr=98.77%, sys=0.90%, ctx=54, majf=0, minf=27
00:36:15.532 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0%
00:36:15.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.532 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.532 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.532 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.532 filename1: (groupid=0, jobs=1): err= 0: pid=3562279: Wed Nov 6 11:18:05 2024
00:36:15.532 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10006msec)
00:36:15.532 slat (nsec): min=5485, max=96430, avg=28163.19, stdev=16286.97
00:36:15.532 clat (usec): min=14140, max=57144, avg=32429.88, stdev=1930.30
00:36:15.532 lat (usec): min=14150, max=57161, avg=32458.04, stdev=1930.33
00:36:15.532 clat percentiles (usec):
00:36:15.532 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113],
00:36:15.532 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375],
00:36:15.532 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424],
00:36:15.532 | 99.00th=[34866], 99.50th=[36439], 99.90th=[56886], 99.95th=[56886],
00:36:15.532 | 99.99th=[56886]
00:36:15.532 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1953.84, stdev=71.56, samples=19
00:36:15.532 iops : min= 448, max= 512, avg=488.42, stdev=17.98, samples=19
00:36:15.532 lat (msec) : 20=0.33%, 50=99.35%, 100=0.33%
00:36:15.532 cpu : usr=99.06%, sys=0.59%, ctx=14, majf=0, minf=24
00:36:15.532 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:36:15.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.532 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.532 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.532 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.532 filename1: (groupid=0, jobs=1): err= 0: pid=3562280: Wed Nov 6 11:18:05 2024
00:36:15.532 read: IOPS=497, BW=1991KiB/s (2039kB/s)(19.5MiB/10008msec)
00:36:15.532 slat (usec): min=5, max=108, avg=18.75, stdev=17.50
00:36:15.532 clat (usec): min=10410, max=46585, avg=31974.62, stdev=2914.97
00:36:15.532 lat (usec): min=10451, max=46593, avg=31993.37, stdev=2915.00
00:36:15.532 clat percentiles (usec):
00:36:15.532 | 1.00th=[16909], 5.00th=[29230], 10.00th=[31851], 20.00th=[32113],
00:36:15.532 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:36:15.532 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424],
00:36:15.532 | 99.00th=[35390], 99.50th=[36439], 99.90th=[44827], 99.95th=[46400],
00:36:15.532 | 99.99th=[46400]
00:36:15.532 bw ( KiB/s): min= 1920, max= 2480, per=4.20%, avg=1989.89, stdev=133.14, samples=19
00:36:15.532 iops : min= 480, max= 620, avg=497.47, stdev=33.29, samples=19
00:36:15.532 lat (msec) : 20=1.43%, 50=98.57%
00:36:15.532 cpu : usr=98.79%, sys=0.80%, ctx=44, majf=0, minf=25
00:36:15.532 IO depths : 1=6.0%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.5%, 32=0.0%, >=64=0.0%
00:36:15.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.532 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.532 issued rwts: total=4982,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.532 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.532 filename1: (groupid=0, jobs=1): err= 0: pid=3562281: Wed Nov 6 11:18:05 2024
00:36:15.532 read: IOPS=491, BW=1964KiB/s (2011kB/s)(19.2MiB/10008msec)
00:36:15.532 slat (usec): min=4, max=100, avg=23.53, stdev=15.80
00:36:15.532 clat (usec): min=15773, max=58047, avg=32365.25, stdev=2594.15
00:36:15.532 lat (usec): min=15779, max=58053, avg=32388.78, stdev=2595.36
00:36:15.532 clat percentiles (usec):
00:36:15.532 | 1.00th=[21890], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113],
00:36:15.532 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375],
00:36:15.532 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:15.532 | 99.00th=[41157], 99.50th=[47449], 99.90th=[57934], 99.95th=[57934],
00:36:15.532 | 99.99th=[57934]
00:36:15.532 bw ( KiB/s): min= 1920, max= 2144, per=4.14%, avg=1961.26, stdev=65.82, samples=19
00:36:15.532 iops : min= 480, max= 536, avg=490.32, stdev=16.46, samples=19
00:36:15.532 lat (msec) : 20=0.57%, 50=99.19%, 100=0.24%
00:36:15.532 cpu : usr=99.06%, sys=0.60%, ctx=15, majf=0, minf=28
00:36:15.532 IO depths : 1=5.7%, 2=11.5%, 4=23.4%, 8=52.5%, 16=6.9%, 32=0.0%, >=64=0.0%
00:36:15.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.532 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.532 issued rwts: total=4914,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.532 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.532 filename1: (groupid=0, jobs=1): err= 0: pid=3562282: Wed Nov 6 11:18:05 2024
00:36:15.532 read: IOPS=493, BW=1975KiB/s (2023kB/s)(19.3MiB/10015msec)
00:36:15.532 slat (nsec): min=5451, max=90929, avg=21538.81, stdev=16544.34
00:36:15.532 clat (usec): min=13952, max=53597, avg=32213.60, stdev=2695.27
00:36:15.532 lat (usec): min=13962, max=53638, avg=32235.14, stdev=2696.16
00:36:15.532 clat percentiles (usec):
00:36:15.532 | 1.00th=[21365], 5.00th=[29754], 10.00th=[31851], 20.00th=[32113],
00:36:15.532 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375],
00:36:15.532 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:15.532 | 99.00th=[41157], 99.50th=[42206], 99.90th=[53740], 99.95th=[53740],
00:36:15.532 | 99.99th=[53740]
00:36:15.532 bw ( KiB/s): min= 1920, max= 2192, per=4.18%, avg=1977.30, stdev=86.94, samples=20
00:36:15.532 iops : min= 480, max= 548, avg=494.25, stdev=21.68, samples=20
00:36:15.532 lat (msec) : 20=0.97%, 50=98.91%, 100=0.12%
00:36:15.532 cpu : usr=98.93%, sys=0.72%, ctx=16, majf=0, minf=21
00:36:15.532 IO depths : 1=5.6%, 2=11.3%, 4=23.3%, 8=52.9%, 16=6.9%, 32=0.0%, >=64=0.0%
00:36:15.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.532 complete : 0=0.0%, 4=93.6%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.532 issued rwts: total=4946,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.532 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.532 filename1: (groupid=0, jobs=1): err= 0: pid=3562283: Wed Nov 6 11:18:05 2024
00:36:15.532 read: IOPS=492, BW=1970KiB/s (2018kB/s)(19.3MiB/10008msec)
00:36:15.532 slat (usec): min=5, max=102, avg=21.23, stdev=15.12
00:36:15.532 clat (usec): min=8971, max=67239, avg=32271.35, stdev=3213.32
00:36:15.532 lat (usec): min=8977, max=67254, avg=32292.59, stdev=3214.66
00:36:15.532 clat percentiles (usec):
00:36:15.532 | 1.00th=[22414], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113],
00:36:15.532 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375],
00:36:15.532 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424],
00:36:15.532 | 99.00th=[34866], 99.50th=[54789], 99.90th=[67634], 99.95th=[67634],
00:36:15.532 | 99.99th=[67634]
00:36:15.532 bw ( KiB/s): min= 1792, max= 2208, per=4.14%, avg=1961.26, stdev=88.81, samples=19
00:36:15.532 iops : min= 448, max= 552, avg=490.32, stdev=22.20, samples=19
00:36:15.532 lat (msec) : 10=0.32%, 20=0.37%, 50=98.74%, 100=0.57%
00:36:15.532 cpu : usr=98.94%, sys=0.72%, ctx=13, majf=0, minf=21
00:36:15.532 IO depths : 1=5.7%, 2=11.6%, 4=24.1%, 8=51.7%, 16=6.9%, 32=0.0%, >=64=0.0%
00:36:15.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.532 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.532 issued rwts: total=4930,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.532 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.532 filename1: (groupid=0, jobs=1): err= 0: pid=3562284: Wed Nov 6 11:18:05 2024
00:36:15.532 read: IOPS=507, BW=2028KiB/s (2077kB/s)(19.8MiB/10007msec)
00:36:15.532 slat (nsec): min=5404, max=93310, avg=18019.12, stdev=15171.17
00:36:15.532 clat (usec): min=13211, max=49233, avg=31409.21, stdev=4090.73
00:36:15.532 lat (usec): min=13218, max=49240, avg=31427.23, stdev=4092.97
00:36:15.532 clat percentiles (usec):
00:36:15.532 | 1.00th=[16450], 5.00th=[21890], 10.00th=[25297], 20.00th=[31851],
00:36:15.533 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375],
00:36:15.533 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:15.533 | 99.00th=[42730], 99.50th=[45351], 99.90th=[49021], 99.95th=[49021],
00:36:15.533 | 99.99th=[49021]
00:36:15.533 bw ( KiB/s): min= 1920, max= 2352, per=4.29%, avg=2028.63, stdev=134.25, samples=19
00:36:15.533 iops : min= 480, max= 588, avg=507.16, stdev=33.56, samples=19
00:36:15.533 lat (msec) : 20=2.88%, 50=97.12%
00:36:15.533 cpu : usr=98.98%, sys=0.67%, ctx=14, majf=0, minf=33
00:36:15.533 IO depths : 1=4.9%, 2=9.9%, 4=21.5%, 8=56.1%, 16=7.7%, 32=0.0%, >=64=0.0%
00:36:15.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.533 complete : 0=0.0%, 4=93.1%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.533 issued rwts: total=5074,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.533 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.533 filename1: (groupid=0, jobs=1): err= 0: pid=3562285: Wed Nov 6 11:18:05 2024
00:36:15.533 read: IOPS=496, BW=1984KiB/s (2032kB/s)(19.4MiB/10024msec)
00:36:15.533 slat (nsec): min=5443, max=95395, avg=19460.77, stdev=15361.38
00:36:15.533 clat (usec): min=8722, max=48156, avg=32094.82, stdev=2950.64
00:36:15.533 lat (usec): min=8730, max=48164, avg=32114.28, stdev=2950.86
00:36:15.533 clat percentiles (usec):
00:36:15.533 | 1.00th=[12256], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113],
00:36:15.533 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:36:15.533 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:15.533 | 99.00th=[34866], 99.50th=[35390], 99.90th=[44827], 99.95th=[44827],
00:36:15.533 | 99.99th=[47973]
00:36:15.533 bw ( KiB/s): min= 1920, max= 2400, per=4.19%, avg=1982.40, stdev=114.90, samples=20
00:36:15.533 iops : min= 480, max= 600, avg=495.60, stdev=28.72, samples=20
00:36:15.533 lat (msec) : 10=0.52%, 20=1.49%, 50=97.99%
00:36:15.533 cpu : usr=99.03%, sys=0.62%, ctx=18, majf=0, minf=28
00:36:15.533 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0%
00:36:15.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.533 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.533 issued rwts: total=4972,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.533 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.533 filename1: (groupid=0, jobs=1): err= 0: pid=3562286: Wed Nov 6 11:18:05 2024
00:36:15.533 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10011msec)
00:36:15.533 slat (nsec): min=5406, max=83801, avg=17699.04, stdev=13150.09
00:36:15.533 clat (usec): min=20583, max=36596, avg=32466.71, stdev=1064.74
00:36:15.533 lat (usec): min=20596, max=36621, avg=32484.41, stdev=1064.23
00:36:15.533 clat percentiles (usec):
00:36:15.533 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113],
00:36:15.533 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375],
00:36:15.533 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:15.533 | 99.00th=[34866], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439],
00:36:15.533 | 99.99th=[36439]
00:36:15.533 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1960.42, stdev=74.55, samples=19
00:36:15.533 iops : min= 448, max= 512, avg=490.11, stdev=18.64, samples=19
00:36:15.533 lat (msec) : 50=100.00%
00:36:15.533 cpu : usr=98.64%, sys=0.83%, ctx=72, majf=0, minf=25
00:36:15.533 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:36:15.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.533 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.533 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.533 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.533 filename2: (groupid=0, jobs=1): err= 0: pid=3562287: Wed Nov 6 11:18:05 2024
00:36:15.533 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10006msec)
00:36:15.533 slat (nsec): min=5397, max=98327, avg=16605.24, stdev=15075.58
00:36:15.533 clat (usec): min=8463, max=73980, avg=32532.50, stdev=3515.28
00:36:15.533 lat (usec): min=8472, max=73998, avg=32549.10, stdev=3515.46
00:36:15.533 clat percentiles (usec):
00:36:15.533 | 1.00th=[20841], 5.00th=[26870], 10.00th=[31851], 20.00th=[32113],
00:36:15.533 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:36:15.533 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[36439],
00:36:15.533 | 99.00th=[44827], 99.50th=[50594], 99.90th=[73925], 99.95th=[73925],
00:36:15.533 | 99.99th=[73925]
00:36:15.533 bw ( KiB/s): min= 1795, max= 2096, per=4.13%, avg=1956.79, stdev=68.80, samples=19
00:36:15.533 iops : min= 448, max= 524, avg=489.16, stdev=17.30, samples=19
00:36:15.533 lat (msec) : 10=0.12%, 20=0.69%, 50=98.68%, 100=0.51%
00:36:15.533 cpu : usr=99.05%, sys=0.61%, ctx=13, majf=0, minf=38
00:36:15.533 IO depths : 1=0.1%, 2=0.1%, 4=1.4%, 8=80.7%, 16=17.7%, 32=0.0%, >=64=0.0%
00:36:15.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.533 complete : 0=0.0%, 4=86.1%, 8=13.1%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.533 issued rwts: total=4911,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.533 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.533 filename2: (groupid=0, jobs=1): err= 0: pid=3562288: Wed Nov 6 11:18:05 2024
00:36:15.533 read: IOPS=505, BW=2023KiB/s (2071kB/s)(19.8MiB/10007msec)
00:36:15.533 slat (usec): min=5, max=101, avg=25.22, stdev=18.09
00:36:15.533 clat (usec): min=11027, max=74513, avg=31427.48, stdev=4482.74
00:36:15.533 lat (usec): min=11033, max=74538, avg=31452.69, stdev=4486.69
00:36:15.533 clat percentiles (usec):
00:36:15.533 | 1.00th=[20317], 5.00th=[22414], 10.00th=[24249], 20.00th=[31589],
00:36:15.533 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375],
00:36:15.533 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[34341],
00:36:15.533 | 99.00th=[47449], 99.50th=[50070], 99.90th=[57934], 99.95th=[73925],
00:36:15.533 | 99.99th=[74974]
00:36:15.533 bw ( KiB/s): min= 1760, max= 2416, per=4.27%, avg=2021.89, stdev=137.04, samples=19
00:36:15.533 iops : min= 440, max= 604, avg=505.47, stdev=34.26, samples=19
00:36:15.533 lat (msec) : 20=0.91%, 50=98.56%, 100=0.53%
00:36:15.533 cpu : usr=99.10%, sys=0.56%, ctx=15, majf=0, minf=25
00:36:15.533 IO depths : 1=2.8%, 2=7.4%, 4=19.7%, 8=60.0%, 16=10.1%, 32=0.0%, >=64=0.0%
00:36:15.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.533 complete : 0=0.0%, 4=92.7%, 8=2.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.533 issued rwts: total=5060,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.533 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.533 filename2: (groupid=0, jobs=1): err= 0: pid=3562289: Wed Nov 6 11:18:05 2024
00:36:15.533 read: IOPS=491, BW=1967KiB/s (2014kB/s)(19.2MiB/10022msec)
00:36:15.533 slat (usec): min=5, max=107, avg=25.04, stdev=17.92
00:36:15.533 clat (usec): min=14448, max=43406, avg=32334.64, stdev=1639.88
00:36:15.533 lat (usec): min=14457, max=43412, avg=32359.68, stdev=1640.60
00:36:15.533 clat percentiles (usec):
00:36:15.533 | 1.00th=[24773], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113],
00:36:15.533 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375],
00:36:15.533 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424],
00:36:15.533 | 99.00th=[34341], 99.50th=[37487], 99.90th=[42730], 99.95th=[42730],
00:36:15.533 | 99.99th=[43254]
00:36:15.533 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1967.16, stdev=61.85, samples=19
00:36:15.533 iops : min= 480, max= 512, avg=491.79, stdev=15.46, samples=19
00:36:15.533 lat (msec) : 20=0.32%, 50=99.68%
00:36:15.533 cpu : usr=98.72%, sys=0.93%, ctx=17, majf=0, minf=29
00:36:15.533 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0%
00:36:15.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.533 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.533 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.533 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.533 filename2: (groupid=0, jobs=1): err= 0: pid=3562290: Wed Nov 6 11:18:05 2024
00:36:15.533 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10007msec)
00:36:15.533 slat (nsec): min=5455, max=99869, avg=25926.39, stdev=15889.61
00:36:15.533 clat (usec): min=24637, max=46405, avg=32469.78, stdev=1052.79
00:36:15.533 lat (usec): min=24669, max=46423, avg=32495.71, stdev=1051.60
00:36:15.533 clat percentiles (usec):
00:36:15.533 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113],
00:36:15.533 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375],
00:36:15.533 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424],
00:36:15.533 | 99.00th=[34341], 99.50th=[34866], 99.90th=[46400], 99.95th=[46400],
00:36:15.533 | 99.99th=[46400]
00:36:15.533 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1953.68, stdev=71.93, samples=19
00:36:15.533 iops : min= 448, max= 512, avg=488.42, stdev=17.98, samples=19
00:36:15.533 lat (msec) : 50=100.00%
00:36:15.533 cpu : usr=98.96%, sys=0.70%, ctx=17, majf=0, minf=23
00:36:15.533 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:36:15.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.533 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.533 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.533 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.534 filename2: (groupid=0, jobs=1): err= 0: pid=3562291: Wed Nov 6 11:18:05 2024
00:36:15.534 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10003msec)
00:36:15.534 slat (usec): min=5, max=101, avg=29.35, stdev=17.90
00:36:15.534 clat (usec): min=16010, max=57517, avg=32534.57, stdev=3001.71
00:36:15.534 lat (usec): min=16042, max=57534, avg=32563.92, stdev=3001.40
00:36:15.534 clat percentiles (usec):
00:36:15.534 | 1.00th=[21890], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851],
00:36:15.534 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375],
00:36:15.534 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[34341],
00:36:15.534 | 99.00th=[46924], 99.50th=[49546], 99.90th=[57410], 99.95th=[57410],
00:36:15.534 | 99.99th=[57410]
00:36:15.534 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1947.11, stdev=70.00, samples=19
00:36:15.534 iops : min= 448, max= 512, avg=486.74, stdev=17.59, samples=19
00:36:15.534 lat (msec) : 20=0.33%, 50=99.18%, 100=0.49%
00:36:15.534 cpu : usr=98.97%, sys=0.70%, ctx=14, majf=0, minf=20
00:36:15.534 IO depths : 1=5.0%, 2=10.6%, 4=23.0%, 8=53.9%, 16=7.6%, 32=0.0%, >=64=0.0%
00:36:15.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.534 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.534 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.534 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.534 filename2: (groupid=0, jobs=1): err= 0: pid=3562292: Wed Nov 6 11:18:05 2024
00:36:15.534 read: IOPS=495, BW=1981KiB/s (2029kB/s)(19.4MiB/10015msec)
00:36:15.534 slat (nsec): min=5413, max=94346, avg=16897.43, stdev=13581.92
00:36:15.534 clat (usec): min=13485, max=52447, avg=32167.71, stdev=2749.95
00:36:15.534 lat (usec): min=13498, max=52467, avg=32184.61, stdev=2750.15
00:36:15.534 clat percentiles (usec):
00:36:15.534 | 1.00th=[18744], 5.00th=[31327], 10.00th=[31851], 20.00th=[32113],
00:36:15.534 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:36:15.534 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424],
00:36:15.534 | 99.00th=[34866], 99.50th=[39060], 99.90th=[52167], 99.95th=[52167],
00:36:15.534 | 99.99th=[52691]
00:36:15.534 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=1977.60, stdev=77.42, samples=20
00:36:15.534 iops : min= 480, max= 544, avg=494.40, stdev=19.35, samples=20
00:36:15.534 lat (msec) : 20=2.00%, 50=97.68%, 100=0.32%
00:36:15.534 cpu : usr=98.94%, sys=0.72%, ctx=15, majf=0, minf=30
00:36:15.534 IO depths : 1=6.0%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.5%, 32=0.0%, >=64=0.0%
00:36:15.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.534 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.534 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.534 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.534 filename2: (groupid=0, jobs=1): err= 0: pid=3562293: Wed Nov 6 11:18:05 2024
00:36:15.534 read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.3MiB/10007msec)
00:36:15.534 slat (usec): min=5, max=156, avg=14.14, stdev=13.17
00:36:15.534 clat (usec): min=10457, max=41981, avg=32249.33, stdev=2773.19
00:36:15.534 lat (usec): min=10492, max=41989, avg=32263.47, stdev=2772.36
00:36:15.534 clat percentiles (usec):
00:36:15.534 | 1.00th=[18482], 5.00th=[31327], 10.00th=[31851], 20.00th=[32113],
00:36:15.534 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:36:15.534 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817],
00:36:15.534 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206],
00:36:15.534 | 99.99th=[42206]
00:36:15.534 bw ( KiB/s): min= 1920, max= 2176, per=4.17%, avg=1975.16, stdev=79.15, samples=19
00:36:15.534 iops : min= 480, max= 544, avg=493.79, stdev=19.79, samples=19
00:36:15.534 lat (msec) : 20=1.11%, 50=98.89%
00:36:15.534 cpu : usr=99.04%, sys=0.68%, ctx=15, majf=0, minf=22
00:36:15.534 IO depths : 1=5.8%, 2=11.7%, 4=23.7%, 8=52.1%, 16=6.7%, 32=0.0%, >=64=0.0%
00:36:15.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.534 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.534 issued rwts: total=4947,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.534 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.534 filename2: (groupid=0, jobs=1): err= 0: pid=3562294: Wed Nov 6 11:18:05 2024
00:36:15.534 read: IOPS=492, BW=1970KiB/s (2017kB/s)(19.3MiB/10010msec)
00:36:15.534 slat (nsec): min=5588, max=98107, avg=24855.89, stdev=15647.89
00:36:15.534 clat (usec): min=15638, max=52951, avg=32269.78, stdev=2050.90
00:36:15.534 lat (usec): min=15652, max=52962, avg=32294.63, stdev=2051.76
00:36:15.534 clat percentiles (usec):
00:36:15.534 | 1.00th=[23200], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113],
00:36:15.534 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375],
00:36:15.534 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424],
00:36:15.534 | 99.00th=[36963], 99.50th=[40633], 99.90th=[52691], 99.95th=[52691],
00:36:15.534 | 99.99th=[52691]
00:36:15.534 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1961.26, stdev=58.26, samples=19
00:36:15.534 iops : min= 480, max= 512, avg=490.32, stdev=14.56, samples=19
00:36:15.534 lat (msec) : 20=0.32%, 50=99.55%, 100=0.12%
00:36:15.534 cpu : usr=99.05%, sys=0.64%, ctx=19, majf=0, minf=24
00:36:15.534 IO depths : 1=5.7%, 2=11.5%, 4=23.7%, 8=52.2%, 16=6.9%, 32=0.0%, >=64=0.0%
00:36:15.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.534 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:15.534 issued rwts: total=4930,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:15.534 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:15.534
00:36:15.534 Run status group 0 (all jobs):
00:36:15.534 READ: bw=46.2MiB/s (48.5MB/s), 1951KiB/s-2037KiB/s (1998kB/s-2086kB/s), io=464MiB (487MB), run=10001-10048msec
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:36:15.534 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.535 bdev_null0
11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.535 [2024-11-06 11:18:05.839127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.535 bdev_null1
11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:15.535 {
00:36:15.535 "params": {
00:36:15.535 "name": "Nvme$subsystem",
00:36:15.535 "trtype": "$TEST_TRANSPORT",
00:36:15.535 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:15.535 "adrfam": "ipv4", 00:36:15.535 "trsvcid": "$NVMF_PORT", 00:36:15.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:15.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:15.535 "hdgst": ${hdgst:-false}, 00:36:15.535 "ddgst": ${ddgst:-false} 00:36:15.535 }, 00:36:15.535 "method": "bdev_nvme_attach_controller" 00:36:15.535 } 00:36:15.535 EOF 00:36:15.535 )") 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:15.535 { 00:36:15.535 "params": { 00:36:15.535 "name": "Nvme$subsystem", 00:36:15.535 "trtype": "$TEST_TRANSPORT", 00:36:15.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:15.535 "adrfam": "ipv4", 00:36:15.535 "trsvcid": "$NVMF_PORT", 00:36:15.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:15.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:15.535 "hdgst": ${hdgst:-false}, 00:36:15.535 "ddgst": ${ddgst:-false} 00:36:15.535 }, 00:36:15.535 "method": "bdev_nvme_attach_controller" 00:36:15.535 } 00:36:15.535 EOF 00:36:15.535 )") 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:15.535 11:18:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:15.535 "params": { 00:36:15.535 "name": "Nvme0", 00:36:15.535 "trtype": "tcp", 00:36:15.535 "traddr": "10.0.0.2", 00:36:15.535 "adrfam": "ipv4", 00:36:15.535 "trsvcid": "4420", 00:36:15.535 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:15.535 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:15.535 "hdgst": false, 00:36:15.535 "ddgst": false 00:36:15.535 }, 00:36:15.535 "method": "bdev_nvme_attach_controller" 00:36:15.535 },{ 00:36:15.535 "params": { 00:36:15.535 "name": "Nvme1", 00:36:15.535 "trtype": "tcp", 00:36:15.535 "traddr": "10.0.0.2", 00:36:15.535 "adrfam": "ipv4", 00:36:15.535 "trsvcid": "4420", 00:36:15.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:15.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:15.535 "hdgst": false, 00:36:15.535 "ddgst": false 00:36:15.535 }, 00:36:15.535 "method": "bdev_nvme_attach_controller" 00:36:15.536 }' 00:36:15.536 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:15.536 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:15.536 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:15.536 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.536 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:15.536 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:15.536 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:15.536 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:15.536 11:18:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:15.536 11:18:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:15.536 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:15.536 ... 00:36:15.536 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:15.536 ... 00:36:15.536 fio-3.35 00:36:15.536 Starting 4 threads 00:36:20.828 00:36:20.828 filename0: (groupid=0, jobs=1): err= 0: pid=3564607: Wed Nov 6 11:18:12 2024 00:36:20.828 read: IOPS=2137, BW=16.7MiB/s (17.5MB/s)(83.5MiB/5002msec) 00:36:20.828 slat (nsec): min=5392, max=62040, avg=8932.85, stdev=3727.67 00:36:20.828 clat (usec): min=2082, max=5985, avg=3723.98, stdev=322.95 00:36:20.828 lat (usec): min=2088, max=5993, avg=3732.92, stdev=322.82 00:36:20.828 clat percentiles (usec): 00:36:20.828 | 1.00th=[ 2966], 5.00th=[ 3326], 10.00th=[ 3458], 20.00th=[ 3556], 00:36:20.828 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3785], 00:36:20.828 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 3851], 95.00th=[ 4113], 00:36:20.828 | 99.00th=[ 5211], 99.50th=[ 5342], 99.90th=[ 5735], 99.95th=[ 5800], 00:36:20.828 | 99.99th=[ 5997] 00:36:20.828 bw ( KiB/s): min=16816, max=17312, per=25.53%, avg=17109.33, stdev=133.87, samples=9 00:36:20.828 iops : min= 2102, max= 2164, avg=2138.67, stdev=16.73, samples=9 00:36:20.828 lat (msec) : 4=92.95%, 10=7.05% 00:36:20.828 cpu : usr=96.70%, sys=3.02%, ctx=8, majf=0, minf=60 00:36:20.828 IO depths : 1=0.1%, 2=0.1%, 4=64.2%, 8=35.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.828 complete : 0=0.0%, 4=98.8%, 8=1.2%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:36:20.828 issued rwts: total=10690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.828 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:20.828 filename0: (groupid=0, jobs=1): err= 0: pid=3564608: Wed Nov 6 11:18:12 2024 00:36:20.828 read: IOPS=2069, BW=16.2MiB/s (17.0MB/s)(80.9MiB/5002msec) 00:36:20.828 slat (nsec): min=5404, max=52088, avg=8408.09, stdev=2325.99 00:36:20.828 clat (usec): min=1310, max=6381, avg=3841.98, stdev=539.12 00:36:20.828 lat (usec): min=1322, max=6389, avg=3850.38, stdev=539.00 00:36:20.828 clat percentiles (usec): 00:36:20.828 | 1.00th=[ 3032], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3556], 00:36:20.828 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3785], 00:36:20.828 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4178], 95.00th=[ 5407], 00:36:20.828 | 99.00th=[ 5800], 99.50th=[ 5866], 99.90th=[ 6128], 99.95th=[ 6259], 00:36:20.828 | 99.99th=[ 6390] 00:36:20.828 bw ( KiB/s): min=16336, max=16784, per=24.72%, avg=16561.70, stdev=157.49, samples=10 00:36:20.828 iops : min= 2042, max= 2098, avg=2070.20, stdev=19.67, samples=10 00:36:20.828 lat (msec) : 2=0.04%, 4=86.48%, 10=13.49% 00:36:20.828 cpu : usr=96.56%, sys=3.16%, ctx=8, majf=0, minf=39 00:36:20.828 IO depths : 1=0.1%, 2=0.1%, 4=73.5%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.828 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.828 issued rwts: total=10352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.828 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:20.828 filename1: (groupid=0, jobs=1): err= 0: pid=3564609: Wed Nov 6 11:18:12 2024 00:36:20.828 read: IOPS=2106, BW=16.5MiB/s (17.3MB/s)(82.3MiB/5003msec) 00:36:20.828 slat (nsec): min=5389, max=58193, avg=8161.07, stdev=3423.48 00:36:20.828 clat (usec): min=2265, max=6436, avg=3778.23, stdev=467.52 00:36:20.828 lat (usec): min=2270, max=6441, avg=3786.40, 
stdev=467.20 00:36:20.828 clat percentiles (usec): 00:36:20.828 | 1.00th=[ 2933], 5.00th=[ 3261], 10.00th=[ 3425], 20.00th=[ 3523], 00:36:20.828 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3785], 60.00th=[ 3785], 00:36:20.828 | 70.00th=[ 3818], 80.00th=[ 3818], 90.00th=[ 4015], 95.00th=[ 5145], 00:36:20.828 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 6259], 99.95th=[ 6325], 00:36:20.828 | 99.99th=[ 6456] 00:36:20.828 bw ( KiB/s): min=16176, max=17184, per=25.16%, avg=16856.00, stdev=368.52, samples=10 00:36:20.828 iops : min= 2022, max= 2148, avg=2107.00, stdev=46.07, samples=10 00:36:20.828 lat (msec) : 4=89.80%, 10=10.20% 00:36:20.828 cpu : usr=96.52%, sys=3.22%, ctx=6, majf=0, minf=41 00:36:20.828 IO depths : 1=0.1%, 2=0.1%, 4=66.5%, 8=33.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.828 complete : 0=0.0%, 4=97.2%, 8=2.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.828 issued rwts: total=10540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.828 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:20.828 filename1: (groupid=0, jobs=1): err= 0: pid=3564610: Wed Nov 6 11:18:12 2024 00:36:20.828 read: IOPS=2063, BW=16.1MiB/s (16.9MB/s)(80.6MiB/5002msec) 00:36:20.828 slat (nsec): min=5396, max=37220, avg=8044.61, stdev=2398.38 00:36:20.828 clat (usec): min=1878, max=45887, avg=3854.97, stdev=1286.20 00:36:20.828 lat (usec): min=1887, max=45912, avg=3863.01, stdev=1286.27 00:36:20.828 clat percentiles (usec): 00:36:20.828 | 1.00th=[ 2835], 5.00th=[ 3294], 10.00th=[ 3458], 20.00th=[ 3556], 00:36:20.828 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3785], 00:36:20.828 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4146], 95.00th=[ 5407], 00:36:20.828 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 6259], 99.95th=[45876], 00:36:20.828 | 99.99th=[45876] 00:36:20.828 bw ( KiB/s): min=14992, max=17136, per=24.60%, avg=16480.00, stdev=626.92, samples=9 00:36:20.828 
iops : min= 1874, max= 2142, avg=2060.00, stdev=78.36, samples=9 00:36:20.828 lat (msec) : 2=0.01%, 4=87.34%, 10=12.57%, 50=0.08% 00:36:20.828 cpu : usr=96.72%, sys=3.00%, ctx=6, majf=0, minf=48 00:36:20.828 IO depths : 1=0.1%, 2=0.4%, 4=72.9%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.828 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.828 issued rwts: total=10320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.828 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:20.828 00:36:20.828 Run status group 0 (all jobs): 00:36:20.828 READ: bw=65.4MiB/s (68.6MB/s), 16.1MiB/s-16.7MiB/s (16.9MB/s-17.5MB/s), io=327MiB (343MB), run=5002-5003msec 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.091 
11:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.091 00:36:21.091 real 0m24.848s 00:36:21.091 user 5m15.462s 00:36:21.091 sys 0m4.251s 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:21.091 11:18:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.091 ************************************ 00:36:21.091 END TEST fio_dif_rand_params 00:36:21.091 ************************************ 00:36:21.091 11:18:12 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:21.091 11:18:12 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:21.091 11:18:12 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:21.091 11:18:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
00:36:21.091 ************************************ 00:36:21.091 START TEST fio_dif_digest 00:36:21.091 ************************************ 00:36:21.091 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:36:21.091 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:21.091 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:21.091 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:21.091 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:21.091 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:21.091 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:21.091 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:21.091 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:21.092 bdev_null0 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:21.092 [2024-11-06 11:18:12.446222] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # 
fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:21.092 { 00:36:21.092 "params": { 00:36:21.092 "name": "Nvme$subsystem", 00:36:21.092 "trtype": "$TEST_TRANSPORT", 00:36:21.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:21.092 "adrfam": "ipv4", 00:36:21.092 "trsvcid": "$NVMF_PORT", 00:36:21.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:21.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:21.092 "hdgst": ${hdgst:-false}, 00:36:21.092 "ddgst": ${ddgst:-false} 00:36:21.092 }, 00:36:21.092 "method": "bdev_nvme_attach_controller" 00:36:21.092 } 00:36:21.092 EOF 00:36:21.092 )") 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 
00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:21.092 "params": { 00:36:21.092 "name": "Nvme0", 00:36:21.092 "trtype": "tcp", 00:36:21.092 "traddr": "10.0.0.2", 00:36:21.092 "adrfam": "ipv4", 00:36:21.092 "trsvcid": "4420", 00:36:21.092 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:21.092 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:21.092 "hdgst": true, 00:36:21.092 "ddgst": true 00:36:21.092 }, 00:36:21.092 "method": "bdev_nvme_attach_controller" 00:36:21.092 }' 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 
00:36:21.092 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:21.383 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:21.383 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:21.383 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:21.383 11:18:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:21.645 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:21.645 ... 00:36:21.645 fio-3.35 00:36:21.645 Starting 3 threads 00:36:33.868 00:36:33.868 filename0: (groupid=0, jobs=1): err= 0: pid=3566565: Wed Nov 6 11:18:23 2024 00:36:33.868 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(281MiB/10048msec) 00:36:33.868 slat (nsec): min=5771, max=30993, avg=6523.43, stdev=1003.93 00:36:33.868 clat (usec): min=7599, max=55204, avg=13392.93, stdev=3407.11 00:36:33.868 lat (usec): min=7606, max=55210, avg=13399.45, stdev=3407.12 00:36:33.868 clat percentiles (usec): 00:36:33.868 | 1.00th=[ 9110], 5.00th=[10552], 10.00th=[11863], 20.00th=[12387], 00:36:33.868 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:36:33.868 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14484], 95.00th=[15008], 00:36:33.868 | 99.00th=[16188], 99.50th=[52691], 99.90th=[54789], 99.95th=[55313], 00:36:33.868 | 99.99th=[55313] 00:36:33.868 bw ( KiB/s): min=25600, max=30208, per=34.56%, avg=28723.20, stdev=1298.99, samples=20 00:36:33.868 iops : min= 200, max= 236, avg=224.40, stdev=10.15, samples=20 00:36:33.868 lat (msec) : 10=3.74%, 20=95.64%, 100=0.62% 00:36:33.868 cpu : usr=94.88%, sys=4.90%, ctx=16, majf=0, minf=104 00:36:33.868 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:36:33.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.868 issued rwts: total=2246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.868 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:33.868 filename0: (groupid=0, jobs=1): err= 0: pid=3566566: Wed Nov 6 11:18:23 2024 00:36:33.868 read: IOPS=219, BW=27.4MiB/s (28.8MB/s)(276MiB/10045msec) 00:36:33.868 slat (nsec): min=5784, max=31589, avg=7630.85, stdev=1640.31 00:36:33.868 clat (usec): min=8491, max=55171, avg=13644.05, stdev=2723.24 00:36:33.868 lat (usec): min=8497, max=55177, avg=13651.68, stdev=2723.23 00:36:33.868 clat percentiles (usec): 00:36:33.868 | 1.00th=[ 9241], 5.00th=[10683], 10.00th=[11994], 20.00th=[12649], 00:36:33.868 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13698], 60.00th=[13960], 00:36:33.868 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15008], 95.00th=[15401], 00:36:33.868 | 99.00th=[16319], 99.50th=[16909], 99.90th=[54264], 99.95th=[54789], 00:36:33.868 | 99.99th=[55313] 00:36:33.868 bw ( KiB/s): min=25856, max=29440, per=33.91%, avg=28185.60, stdev=898.02, samples=20 00:36:33.868 iops : min= 202, max= 230, avg=220.20, stdev= 7.02, samples=20 00:36:33.868 lat (msec) : 10=2.54%, 20=97.10%, 50=0.05%, 100=0.32% 00:36:33.868 cpu : usr=95.21%, sys=4.56%, ctx=16, majf=0, minf=124 00:36:33.868 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:33.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.868 issued rwts: total=2204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.868 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:33.868 filename0: (groupid=0, jobs=1): err= 0: pid=3566567: Wed Nov 6 11:18:23 2024 00:36:33.868 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(259MiB/10004msec) 
00:36:33.868 slat (nsec): min=5761, max=31289, avg=7483.24, stdev=1638.10 00:36:33.868 clat (usec): min=5738, max=57541, avg=14460.62, stdev=4563.91 00:36:33.868 lat (usec): min=5744, max=57547, avg=14468.10, stdev=4563.94 00:36:33.868 clat percentiles (usec): 00:36:33.868 | 1.00th=[ 9503], 5.00th=[11863], 10.00th=[12518], 20.00th=[13173], 00:36:33.868 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:36:33.868 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15533], 95.00th=[16057], 00:36:33.868 | 99.00th=[53216], 99.50th=[54789], 99.90th=[56361], 99.95th=[56361], 00:36:33.868 | 99.99th=[57410] 00:36:33.868 bw ( KiB/s): min=24576, max=28928, per=32.00%, avg=26597.05, stdev=1310.62, samples=19 00:36:33.868 iops : min= 192, max= 226, avg=207.79, stdev=10.24, samples=19 00:36:33.868 lat (msec) : 10=1.78%, 20=97.06%, 100=1.16% 00:36:33.868 cpu : usr=95.36%, sys=4.42%, ctx=19, majf=0, minf=141 00:36:33.868 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:33.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.868 issued rwts: total=2074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.868 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:33.868 00:36:33.868 Run status group 0 (all jobs): 00:36:33.868 READ: bw=81.2MiB/s (85.1MB/s), 25.9MiB/s-27.9MiB/s (27.2MB/s-29.3MB/s), io=816MiB (855MB), run=10004-10048msec 00:36:33.868 11:18:23 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:33.868 11:18:23 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:33.868 11:18:23 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:33.868 11:18:23 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:33.868 11:18:23 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:33.868 11:18:23 nvmf_dif.fio_dif_digest -- 
target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:33.868 11:18:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.868 11:18:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:33.868 11:18:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.868 11:18:23 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:33.868 11:18:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.868 11:18:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:33.868 11:18:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.868 00:36:33.868 real 0m11.311s 00:36:33.868 user 0m41.892s 00:36:33.868 sys 0m1.697s 00:36:33.868 11:18:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:33.868 11:18:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:33.868 ************************************ 00:36:33.868 END TEST fio_dif_digest 00:36:33.868 ************************************ 00:36:33.868 11:18:23 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:33.868 11:18:23 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:33.868 11:18:23 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:33.868 11:18:23 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:33.868 11:18:23 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:33.868 11:18:23 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:33.868 11:18:23 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:33.868 11:18:23 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:33.868 rmmod nvme_tcp 00:36:33.868 rmmod nvme_fabrics 00:36:33.868 rmmod nvme_keyring 00:36:33.868 11:18:23 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:33.868 11:18:23 nvmf_dif -- nvmf/common.sh@128 -- 
# set -e 00:36:33.868 11:18:23 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:33.868 11:18:23 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3555529 ']' 00:36:33.868 11:18:23 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3555529 00:36:33.868 11:18:23 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 3555529 ']' 00:36:33.868 11:18:23 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 3555529 00:36:33.868 11:18:23 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:36:33.868 11:18:23 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:33.868 11:18:23 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3555529 00:36:33.868 11:18:23 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:33.868 11:18:23 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:33.868 11:18:23 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3555529' 00:36:33.868 killing process with pid 3555529 00:36:33.869 11:18:23 nvmf_dif -- common/autotest_common.sh@971 -- # kill 3555529 00:36:33.869 11:18:23 nvmf_dif -- common/autotest_common.sh@976 -- # wait 3555529 00:36:33.869 11:18:24 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:33.869 11:18:24 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:35.780 Waiting for block devices as requested 00:36:35.780 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:36.042 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:36.042 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:36.042 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:36.303 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:36.303 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:36.303 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:36.597 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:36.597 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:36.597 0000:00:01.6 (8086 
0b00): vfio-pci -> ioatdma 00:36:36.883 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:36.883 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:36.883 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:36.883 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:37.185 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:37.185 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:37.185 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:37.493 11:18:28 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:37.493 11:18:28 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:37.493 11:18:28 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:37.493 11:18:28 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:36:37.493 11:18:28 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:37.493 11:18:28 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:36:37.493 11:18:28 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:37.493 11:18:28 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:37.493 11:18:28 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:37.493 11:18:28 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:37.493 11:18:28 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.039 11:18:30 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:40.039 00:36:40.039 real 1m18.156s 00:36:40.039 user 8m0.813s 00:36:40.039 sys 0m20.952s 00:36:40.039 11:18:30 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:40.039 11:18:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:40.039 ************************************ 00:36:40.039 END TEST nvmf_dif 00:36:40.039 ************************************ 00:36:40.039 11:18:30 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:40.039 
11:18:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:40.039 11:18:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:40.039 11:18:30 -- common/autotest_common.sh@10 -- # set +x 00:36:40.039 ************************************ 00:36:40.039 START TEST nvmf_abort_qd_sizes 00:36:40.039 ************************************ 00:36:40.039 11:18:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:40.039 * Looking for test storage... 00:36:40.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:40.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.039 --rc genhtml_branch_coverage=1 00:36:40.039 --rc genhtml_function_coverage=1 00:36:40.039 --rc genhtml_legend=1 00:36:40.039 --rc geninfo_all_blocks=1 00:36:40.039 --rc 
geninfo_unexecuted_blocks=1 00:36:40.039 00:36:40.039 ' 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:40.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.039 --rc genhtml_branch_coverage=1 00:36:40.039 --rc genhtml_function_coverage=1 00:36:40.039 --rc genhtml_legend=1 00:36:40.039 --rc geninfo_all_blocks=1 00:36:40.039 --rc geninfo_unexecuted_blocks=1 00:36:40.039 00:36:40.039 ' 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:40.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.039 --rc genhtml_branch_coverage=1 00:36:40.039 --rc genhtml_function_coverage=1 00:36:40.039 --rc genhtml_legend=1 00:36:40.039 --rc geninfo_all_blocks=1 00:36:40.039 --rc geninfo_unexecuted_blocks=1 00:36:40.039 00:36:40.039 ' 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:40.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.039 --rc genhtml_branch_coverage=1 00:36:40.039 --rc genhtml_function_coverage=1 00:36:40.039 --rc genhtml_legend=1 00:36:40.039 --rc geninfo_all_blocks=1 00:36:40.039 --rc geninfo_unexecuted_blocks=1 00:36:40.039 00:36:40.039 ' 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.039 11:18:31 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:40.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:36:40.040 11:18:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:48.183 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:48.183 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:48.183 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:48.184 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:48.184 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:48.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:48.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:36:48.184 00:36:48.184 --- 10.0.0.2 ping statistics --- 00:36:48.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:48.184 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:48.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:48.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:36:48.184 00:36:48.184 --- 10.0.0.1 ping statistics --- 00:36:48.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:48.184 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:48.184 11:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:50.730 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:50.730 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:50.730 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:50.730 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:50.730 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:50.730 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:50.730 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:50.730 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:50.730 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:50.730 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:50.730 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:50.730 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:50.730 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:50.730 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:50.730 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:36:50.990 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:50.990 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3576008 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3576008 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 3576008 ']' 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:51.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:51.251 11:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:51.251 [2024-11-06 11:18:42.603594] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:36:51.251 [2024-11-06 11:18:42.603631] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:51.251 [2024-11-06 11:18:42.670811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:51.511 [2024-11-06 11:18:42.708634] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:51.511 [2024-11-06 11:18:42.708666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:51.511 [2024-11-06 11:18:42.708674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:51.511 [2024-11-06 11:18:42.708681] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:51.511 [2024-11-06 11:18:42.708687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:51.511 [2024-11-06 11:18:42.710435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:51.511 [2024-11-06 11:18:42.710549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:51.511 [2024-11-06 11:18:42.710697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:51.511 [2024-11-06 11:18:42.710698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:51.511 11:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:51.511 ************************************ 00:36:51.511 START TEST spdk_target_abort 00:36:51.511 ************************************ 00:36:51.511 11:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:36:51.511 11:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:51.511 11:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:36:51.511 11:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.511 11:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.082 spdk_targetn1 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.082 [2024-11-06 11:18:43.208626] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.082 [2024-11-06 11:18:43.256967] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:52.082 11:18:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:52.082 [2024-11-06 11:18:43.464239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:624 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:36:52.082 [2024-11-06 11:18:43.464268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0050 p:1 m:0 dnr:0 00:36:52.082 [2024-11-06 11:18:43.464546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:640 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:36:52.082 [2024-11-06 11:18:43.464556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0051 p:1 m:0 dnr:0 00:36:52.342 [2024-11-06 11:18:43.510705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2384 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:36:52.342 [2024-11-06 
11:18:43.510723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:52.342 [2024-11-06 11:18:43.534259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3096 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:36:52.342 [2024-11-06 11:18:43.534275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0085 p:0 m:0 dnr:0 00:36:52.342 [2024-11-06 11:18:43.542129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3384 len:8 PRP1 0x200004abe000 PRP2 0x0 00:36:52.342 [2024-11-06 11:18:43.542144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00a9 p:0 m:0 dnr:0 00:36:52.342 [2024-11-06 11:18:43.542990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3440 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:36:52.342 [2024-11-06 11:18:43.543002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00af p:0 m:0 dnr:0 00:36:52.342 [2024-11-06 11:18:43.556317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3872 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:36:52.342 [2024-11-06 11:18:43.556332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00e8 p:0 m:0 dnr:0 00:36:52.342 [2024-11-06 11:18:43.556418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3896 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:36:52.342 [2024-11-06 11:18:43.556428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00e8 p:0 m:0 dnr:0 00:36:55.644 Initializing NVMe Controllers 00:36:55.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 
00:36:55.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:55.644 Initialization complete. Launching workers. 00:36:55.644 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11492, failed: 8 00:36:55.644 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3315, failed to submit 8185 00:36:55.644 success 706, unsuccessful 2609, failed 0 00:36:55.644 11:18:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:55.644 11:18:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:55.644 [2024-11-06 11:18:46.624995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:432 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:36:55.644 [2024-11-06 11:18:46.625035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:36:55.644 [2024-11-06 11:18:46.640904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:832 len:8 PRP1 0x200004e54000 PRP2 0x0 00:36:55.644 [2024-11-06 11:18:46.640928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:36:55.644 [2024-11-06 11:18:46.656946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:1176 len:8 PRP1 0x200004e40000 PRP2 0x0 00:36:55.644 [2024-11-06 11:18:46.656968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:009c p:1 m:0 dnr:0 00:36:55.644 [2024-11-06 11:18:46.719834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 
lba:2544 len:8 PRP1 0x200004e3a000 PRP2 0x0 00:36:55.644 [2024-11-06 11:18:46.719861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:55.644 [2024-11-06 11:18:46.735771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:3000 len:8 PRP1 0x200004e46000 PRP2 0x0 00:36:55.644 [2024-11-06 11:18:46.735793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:55.644 [2024-11-06 11:18:46.751908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:179 nsid:1 lba:3384 len:8 PRP1 0x200004e42000 PRP2 0x0 00:36:55.644 [2024-11-06 11:18:46.751931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:179 cdw0:0 sqhd:00a9 p:0 m:0 dnr:0 00:36:58.943 Initializing NVMe Controllers 00:36:58.943 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:58.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:58.943 Initialization complete. Launching workers. 
00:36:58.943 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8533, failed: 6 00:36:58.943 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1194, failed to submit 7345 00:36:58.943 success 358, unsuccessful 836, failed 0 00:36:58.943 11:18:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:58.943 11:18:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:58.943 [2024-11-06 11:18:49.917161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:182 nsid:1 lba:3696 len:8 PRP1 0x200004b12000 PRP2 0x0 00:36:58.943 [2024-11-06 11:18:49.917190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:182 cdw0:0 sqhd:00bb p:1 m:0 dnr:0 00:37:00.854 [2024-11-06 11:18:52.072101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:143 nsid:1 lba:248992 len:8 PRP1 0x200004ad4000 PRP2 0x0 00:37:00.854 [2024-11-06 11:18:52.072131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:143 cdw0:0 sqhd:0083 p:1 m:0 dnr:0 00:37:01.795 Initializing NVMe Controllers 00:37:01.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:01.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:01.795 Initialization complete. Launching workers. 
00:37:01.795 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42902, failed: 2 00:37:01.795 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2478, failed to submit 40426 00:37:01.795 success 599, unsuccessful 1879, failed 0 00:37:01.795 11:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:01.795 11:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.795 11:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:01.795 11:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.795 11:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:01.795 11:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.795 11:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.704 11:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.705 11:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3576008 00:37:03.705 11:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 3576008 ']' 00:37:03.705 11:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 3576008 00:37:03.705 11:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:37:03.705 11:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:03.705 11:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3576008 00:37:03.705 11:18:54 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:03.705 11:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:03.705 11:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3576008' 00:37:03.705 killing process with pid 3576008 00:37:03.705 11:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 3576008 00:37:03.705 11:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 3576008 00:37:03.705 00:37:03.705 real 0m12.064s 00:37:03.705 user 0m47.032s 00:37:03.705 sys 0m1.749s 00:37:03.705 11:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:03.705 11:18:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.705 ************************************ 00:37:03.705 END TEST spdk_target_abort 00:37:03.705 ************************************ 00:37:03.705 11:18:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:03.705 11:18:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:03.705 11:18:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:03.705 11:18:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:03.705 ************************************ 00:37:03.705 START TEST kernel_target_abort 00:37:03.705 ************************************ 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:03.705 11:18:55 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:03.705 11:18:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:07.002 Waiting for block devices as requested 00:37:07.002 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:07.263 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:07.263 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:07.263 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:07.263 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:07.524 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:07.524 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:07.524 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:07.524 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:07.783 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:07.783 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:08.043 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:08.043 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:08.043 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:08.302 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:08.302 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:08.303 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:08.563 11:18:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:08.563 11:18:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:08.563 11:18:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:08.563 11:18:59 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:08.563 11:18:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:08.563 11:18:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:08.563 11:18:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:08.563 11:18:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:08.563 11:18:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:08.563 No valid GPT data, bailing 00:37:08.563 11:18:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:08.823 11:18:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:08.823 11:18:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:08.823 11:18:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:08.823 11:18:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:08.823 11:18:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:08.823 11:18:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:08.823 00:37:08.823 Discovery Log Number of Records 2, Generation counter 2 00:37:08.823 =====Discovery Log Entry 0====== 00:37:08.823 trtype: tcp 00:37:08.823 adrfam: ipv4 00:37:08.823 subtype: current discovery subsystem 00:37:08.823 treq: not specified, sq flow control disable supported 00:37:08.823 portid: 1 00:37:08.823 trsvcid: 4420 00:37:08.823 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:08.823 traddr: 10.0.0.1 00:37:08.823 eflags: none 00:37:08.823 sectype: none 00:37:08.823 =====Discovery Log Entry 1====== 00:37:08.823 trtype: tcp 00:37:08.823 adrfam: ipv4 00:37:08.823 subtype: nvme subsystem 00:37:08.823 treq: not specified, sq flow control disable supported 00:37:08.823 portid: 1 00:37:08.823 trsvcid: 4420 00:37:08.823 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:08.823 traddr: 10.0.0.1 00:37:08.823 eflags: none 00:37:08.823 sectype: none 00:37:08.823 11:19:00 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:08.823 11:19:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:12.121 Initializing NVMe Controllers 00:37:12.121 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:12.121 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:12.121 Initialization complete. Launching workers. 
00:37:12.121 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67196, failed: 0 00:37:12.121 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67196, failed to submit 0 00:37:12.121 success 0, unsuccessful 67196, failed 0 00:37:12.121 11:19:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:12.121 11:19:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:15.421 Initializing NVMe Controllers 00:37:15.421 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:15.421 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:15.421 Initialization complete. Launching workers. 00:37:15.421 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 107862, failed: 0 00:37:15.421 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27150, failed to submit 80712 00:37:15.421 success 0, unsuccessful 27150, failed 0 00:37:15.421 11:19:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:15.421 11:19:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:18.720 Initializing NVMe Controllers 00:37:18.720 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:18.720 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:18.720 Initialization complete. Launching workers. 
00:37:18.720 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101909, failed: 0 00:37:18.720 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25490, failed to submit 76419 00:37:18.720 success 0, unsuccessful 25490, failed 0 00:37:18.720 11:19:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:18.720 11:19:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:18.720 11:19:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:37:18.720 11:19:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:18.720 11:19:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:18.720 11:19:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:18.720 11:19:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:18.720 11:19:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:18.720 11:19:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:18.720 11:19:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:21.268 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:21.268 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:21.268 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:21.268 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:21.268 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:21.268 
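`clean_kernel_target` (`nvmf/common.sh@712-719` above) tears the kernel nvmet target down through configfs, detaching children before parents. A sketch of that ordering as a parameterized function — the defaults mirror the log's paths and NQN, and the function name is ours:

```shell
# Sketch of clean_kernel_target's configfs teardown (nvmf/common.sh@712-719):
# unlink the port's subsystem reference first, then remove namespace, port,
# and subsystem directories, children before parents. Parameterized so it
# can be exercised against a scratch directory.
clean_nvmet_target() {
    local nvmet=${1:-/sys/kernel/config/nvmet}
    local nqn=${2:-nqn.2016-06.io.spdk:testnqn}

    [ -e "$nvmet/subsystems/$nqn" ] || return 0
    if [ -f "$nvmet/subsystems/$nqn/namespaces/1/enable" ]; then
        echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"   # disable ns first
    fi
    rm -f "$nvmet/ports/1/subsystems/$nqn"        # drop port -> subsystem link
    rmdir "$nvmet/subsystems/$nqn/namespaces/1"
    # On real configfs the default groups below vanish with their parent; the
    # extra rmdirs only matter on a plain filesystem, so ignore failures.
    rmdir "$nvmet/subsystems/$nqn/namespaces" 2>/dev/null || true
    rmdir "$nvmet/ports/1/subsystems" 2>/dev/null || true
    rmdir "$nvmet/ports/1"
    rmdir "$nvmet/subsystems/$nqn"
}
```

The ordering matters: configfs refuses to remove a subsystem that a port still references, which is why the symlink under `ports/1/subsystems/` goes first.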
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:21.268 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:21.268 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:21.268 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:21.268 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:21.268 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:21.268 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:21.268 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:21.268 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:21.268 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:21.268 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:23.179 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:23.440 00:37:23.440 real 0m19.665s 00:37:23.440 user 0m9.706s 00:37:23.440 sys 0m5.697s 00:37:23.440 11:19:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:23.440 11:19:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:23.440 ************************************ 00:37:23.440 END TEST kernel_target_abort 00:37:23.440 ************************************ 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:23.440 rmmod nvme_tcp 00:37:23.440 rmmod nvme_fabrics 00:37:23.440 rmmod nvme_keyring 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3576008 ']' 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3576008 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 3576008 ']' 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 3576008 00:37:23.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3576008) - No such process 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 3576008 is not found' 00:37:23.440 Process with pid 3576008 is not found 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:23.440 11:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:26.856 Waiting for block devices as requested 00:37:26.856 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:26.856 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:26.856 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:27.117 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:27.117 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:27.117 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:27.117 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:27.378 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:27.378 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:27.638 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:27.638 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:27.638 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:27.897 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:27.897 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:27.897 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:27.897 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:28.157 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:28.416 11:19:19 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:28.416 11:19:19 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:28.416 11:19:19 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:28.416 11:19:19 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:37:28.416 11:19:19 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:28.416 11:19:19 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:37:28.416 11:19:19 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:28.416 11:19:19 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:28.416 11:19:19 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:28.416 11:19:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:28.416 11:19:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:30.957 11:19:21 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:30.957 00:37:30.957 real 0m50.829s 00:37:30.957 user 1m2.084s 00:37:30.957 sys 0m18.324s 00:37:30.957 11:19:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:30.957 11:19:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:30.957 ************************************ 00:37:30.957 END TEST nvmf_abort_qd_sizes 00:37:30.957 ************************************ 00:37:30.957 11:19:21 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:30.957 11:19:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:30.957 11:19:21 -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:37:30.957 11:19:21 -- common/autotest_common.sh@10 -- # set +x 00:37:30.957 ************************************ 00:37:30.957 START TEST keyring_file 00:37:30.957 ************************************ 00:37:30.957 11:19:21 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:30.957 * Looking for test storage... 00:37:30.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:30.957 11:19:21 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:30.957 11:19:21 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:37:30.957 11:19:21 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:30.957 11:19:22 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:30.957 11:19:22 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:30.957 11:19:22 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:30.957 11:19:22 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:30.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.957 --rc genhtml_branch_coverage=1 00:37:30.957 --rc genhtml_function_coverage=1 00:37:30.957 --rc genhtml_legend=1 00:37:30.957 --rc geninfo_all_blocks=1 00:37:30.957 --rc geninfo_unexecuted_blocks=1 00:37:30.957 00:37:30.957 ' 00:37:30.957 11:19:22 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:30.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.957 --rc genhtml_branch_coverage=1 00:37:30.957 --rc genhtml_function_coverage=1 00:37:30.957 --rc genhtml_legend=1 00:37:30.957 --rc geninfo_all_blocks=1 00:37:30.957 --rc 
geninfo_unexecuted_blocks=1 00:37:30.957 00:37:30.957 ' 00:37:30.957 11:19:22 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:30.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.957 --rc genhtml_branch_coverage=1 00:37:30.957 --rc genhtml_function_coverage=1 00:37:30.957 --rc genhtml_legend=1 00:37:30.957 --rc geninfo_all_blocks=1 00:37:30.957 --rc geninfo_unexecuted_blocks=1 00:37:30.957 00:37:30.957 ' 00:37:30.957 11:19:22 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:30.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.957 --rc genhtml_branch_coverage=1 00:37:30.957 --rc genhtml_function_coverage=1 00:37:30.957 --rc genhtml_legend=1 00:37:30.957 --rc geninfo_all_blocks=1 00:37:30.957 --rc geninfo_unexecuted_blocks=1 00:37:30.957 00:37:30.957 ' 00:37:30.957 11:19:22 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:30.957 11:19:22 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:30.957 11:19:22 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:30.957 11:19:22 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:30.957 11:19:22 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:30.957 11:19:22 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:30.957 11:19:22 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:30.957 11:19:22 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:30.957 11:19:22 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:30.957 11:19:22 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:30.957 11:19:22 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:30.957 11:19:22 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:30.957 11:19:22 keyring_file -- 
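The lcov probe above runs through `scripts/common.sh`'s `lt`/`cmp_versions` helpers, which split version strings on `.-:` and compare them numerically field by field, treating missing fields as 0. A condensed re-implementation of that comparison (same splitting and pad-with-zero semantics; the name `ver_lt` is local to this sketch):

```shell
# Compare dotted version strings numerically, field by field, the way
# scripts/common.sh's cmp_versions does: split on .-: and treat missing
# fields as 0, so "1.15" < "2" and "1.2" compares equal to "1.2.0".
ver_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal is not less-than
}
```

This is why the trace shows `lt 1.15 2` succeeding for the detected lcov: 1 < 2 decides the comparison at the first field, and the numeric (not lexicographic) compare keeps e.g. `1.2` below `1.10`.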
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:30.957 11:19:22 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:30.957 11:19:22 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:30.957 11:19:22 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:30.957 11:19:22 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:30.957 11:19:22 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:30.957 11:19:22 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:30.957 11:19:22 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:30.957 11:19:22 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:30.957 11:19:22 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.958 11:19:22 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.958 11:19:22 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.958 11:19:22 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:30.958 11:19:22 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:37:30.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:30.958 11:19:22 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:30.958 11:19:22 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:30.958 11:19:22 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:30.958 11:19:22 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:30.958 11:19:22 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:30.958 11:19:22 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.s7cQP1MfWW 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.s7cQP1MfWW 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.s7cQP1MfWW 00:37:30.958 11:19:22 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.s7cQP1MfWW 00:37:30.958 11:19:22 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UOs9WcATIA 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:30.958 11:19:22 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UOs9WcATIA 00:37:30.958 11:19:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UOs9WcATIA 00:37:30.958 11:19:22 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.UOs9WcATIA 
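`prep_key` above writes each hex key to a `mktemp` file after passing it through `format_interchange_psk` (`nvmf/common.sh@743`), which wraps the raw bytes in the NVMe TLS PSK interchange format via an inline `python -` heredoc. A hedged sketch of that wrapping, assuming the interchange string is `NVMeTLSkey-1:<hash>:base64(key || little-endian CRC32):` as described for the NVMe/TCP transport; the helper name `format_psk` is ours:

```shell
# Sketch: produce an NVMe TLS PSK interchange string from a hex key.
# $1 = hex key, $2 = hash id (0 = no hash, matching the log's digest=0).
# Assumes the format appends a little-endian CRC32 of the key bytes
# before base64-encoding (hedged; verify against the NVMe/TCP spec).
format_psk() {
    python3 - "$1" "$2" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, 'little')
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
EOF
}
```

The resulting string is what lands in `/tmp/tmp.XXXXXXXXXX` (mode 0600, per the `chmod` in the trace) and is later registered with `keyring_file_add_key`.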
00:37:30.958 11:19:22 keyring_file -- keyring/file.sh@30 -- # tgtpid=3586087 00:37:30.958 11:19:22 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3586087 00:37:30.958 11:19:22 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:30.958 11:19:22 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3586087 ']' 00:37:30.958 11:19:22 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:30.958 11:19:22 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:30.958 11:19:22 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:30.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:30.958 11:19:22 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:30.958 11:19:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:30.958 [2024-11-06 11:19:22.237996] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:37:30.958 [2024-11-06 11:19:22.238053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3586087 ] 00:37:30.958 [2024-11-06 11:19:22.309035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.958 [2024-11-06 11:19:22.345313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:31.897 11:19:23 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:37:31.898 11:19:23 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:31.898 [2024-11-06 11:19:23.034377] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:31.898 null0 00:37:31.898 [2024-11-06 11:19:23.066424] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:31.898 [2024-11-06 11:19:23.066665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.898 11:19:23 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:31.898 [2024-11-06 11:19:23.094482] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:31.898 request: 00:37:31.898 { 00:37:31.898 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:31.898 "secure_channel": false, 00:37:31.898 "listen_address": { 00:37:31.898 "trtype": "tcp", 00:37:31.898 "traddr": "127.0.0.1", 00:37:31.898 "trsvcid": "4420" 00:37:31.898 }, 00:37:31.898 "method": "nvmf_subsystem_add_listener", 00:37:31.898 "req_id": 1 00:37:31.898 } 00:37:31.898 Got JSON-RPC error response 00:37:31.898 response: 00:37:31.898 { 00:37:31.898 "code": -32602, 00:37:31.898 "message": "Invalid parameters" 00:37:31.898 } 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:31.898 11:19:23 keyring_file -- keyring/file.sh@47 -- # bperfpid=3586217 00:37:31.898 11:19:23 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3586217 /var/tmp/bperf.sock 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3586217 ']' 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:31.898 11:19:23 
keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:31.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:31.898 11:19:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:31.898 11:19:23 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:31.898 [2024-11-06 11:19:23.157625] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 00:37:31.898 [2024-11-06 11:19:23.157671] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3586217 ] 00:37:31.898 [2024-11-06 11:19:23.243232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:31.898 [2024-11-06 11:19:23.279157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:32.839 11:19:23 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:32.839 11:19:23 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:37:32.839 11:19:23 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.s7cQP1MfWW 00:37:32.839 11:19:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.s7cQP1MfWW 00:37:32.839 11:19:24 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.UOs9WcATIA 00:37:32.839 11:19:24 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.UOs9WcATIA 00:37:33.098 11:19:24 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:33.098 11:19:24 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:33.098 11:19:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.098 11:19:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.098 11:19:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:33.098 11:19:24 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.s7cQP1MfWW == \/\t\m\p\/\t\m\p\.\s\7\c\Q\P\1\M\f\W\W ]] 00:37:33.098 11:19:24 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:33.098 11:19:24 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:33.098 11:19:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.098 11:19:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.098 11:19:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:33.357 11:19:24 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.UOs9WcATIA == \/\t\m\p\/\t\m\p\.\U\O\s\9\W\c\A\T\I\A ]] 00:37:33.357 11:19:24 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:33.357 11:19:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:33.357 11:19:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:33.357 11:19:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.358 11:19:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.358 11:19:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
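The `get_refcnt`/`get_key` helpers traced above are just `keyring_get_keys` output filtered through `jq`: fetch the key list over the bperf socket, select the entry by name, project one field. The filter can be exercised on its own against canned JSON shaped like the state the log reports (refcnt 1 for each key before the controller attaches); `refcnt_of` is a name local to this sketch:

```shell
# The jq filtering used by get_key/get_refcnt in keyring/common.sh,
# folded into one filter. In the test run the JSON comes from:
#   rpc.py -s /var/tmp/bperf.sock keyring_get_keys
refcnt_of() {
    jq -r --arg name "$1" '.[] | select(.name == $name) | .refcnt'
}

# Canned sample mirroring the key0/key1 state after the attach step,
# where key0's refcnt rises to 2 (keyring + TLS session) and key1 stays 1.
sample='[{"name":"key0","path":"/tmp/tmp.s7cQP1MfWW","refcnt":2},
         {"name":"key1","path":"/tmp/tmp.UOs9WcATIA","refcnt":1}]'
```

This is the check behind the `(( 2 == 2 ))` and `(( 1 == 1 ))` assertions that follow in the trace.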
00:37:33.617 11:19:24 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:33.617 11:19:24 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:33.617 11:19:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:33.617 11:19:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:33.617 11:19:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.617 11:19:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:33.617 11:19:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.617 11:19:25 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:33.617 11:19:25 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:33.617 11:19:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:33.877 [2024-11-06 11:19:25.185592] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:33.877 nvme0n1 00:37:33.877 11:19:25 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:33.877 11:19:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:33.877 11:19:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:33.877 11:19:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.877 11:19:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.877 11:19:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:37:34.137 11:19:25 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:34.137 11:19:25 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:34.137 11:19:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:34.137 11:19:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.137 11:19:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.137 11:19:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:34.137 11:19:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.397 11:19:25 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:34.397 11:19:25 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:34.397 Running I/O for 1 seconds... 00:37:35.336 15383.00 IOPS, 60.09 MiB/s 00:37:35.336 Latency(us) 00:37:35.336 [2024-11-06T10:19:26.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:35.336 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:35.336 nvme0n1 : 1.01 15398.94 60.15 0.00 0.00 8279.97 3631.79 12834.13 00:37:35.336 [2024-11-06T10:19:26.758Z] =================================================================================================================== 00:37:35.336 [2024-11-06T10:19:26.758Z] Total : 15398.94 60.15 0.00 0.00 8279.97 3631.79 12834.13 00:37:35.336 { 00:37:35.336 "results": [ 00:37:35.336 { 00:37:35.336 "job": "nvme0n1", 00:37:35.336 "core_mask": "0x2", 00:37:35.336 "workload": "randrw", 00:37:35.336 "percentage": 50, 00:37:35.336 "status": "finished", 00:37:35.336 "queue_depth": 128, 00:37:35.336 "io_size": 4096, 00:37:35.336 "runtime": 1.007277, 00:37:35.336 "iops": 15398.94189979519, 00:37:35.336 "mibps": 60.152116796074964, 00:37:35.336 
"io_failed": 0, 00:37:35.336 "io_timeout": 0, 00:37:35.336 "avg_latency_us": 8279.96633786775, 00:37:35.336 "min_latency_us": 3631.786666666667, 00:37:35.336 "max_latency_us": 12834.133333333333 00:37:35.336 } 00:37:35.336 ], 00:37:35.336 "core_count": 1 00:37:35.336 } 00:37:35.336 11:19:26 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:35.336 11:19:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:35.596 11:19:26 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:35.596 11:19:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:35.596 11:19:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:35.596 11:19:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:35.596 11:19:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:35.596 11:19:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:35.855 11:19:27 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:35.855 11:19:27 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:35.855 11:19:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:35.855 11:19:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:35.855 11:19:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:35.855 11:19:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:35.855 11:19:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:35.855 11:19:27 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:35.855 11:19:27 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:35.855 11:19:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:35.856 11:19:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:35.856 11:19:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:36.116 11:19:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:36.116 11:19:27 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:36.116 11:19:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:36.116 11:19:27 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:36.116 11:19:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:36.116 [2024-11-06 11:19:27.439963] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:36.116 [2024-11-06 11:19:27.440117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca4c10 (107): Transport endpoint is not connected 00:37:36.116 [2024-11-06 11:19:27.441112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca4c10 (9): Bad file descriptor 00:37:36.116 [2024-11-06 11:19:27.442114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:36.116 [2024-11-06 11:19:27.442121] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:36.116 [2024-11-06 11:19:27.442128] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:36.116 [2024-11-06 11:19:27.442134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:37:36.116 request: 00:37:36.116 { 00:37:36.116 "name": "nvme0", 00:37:36.116 "trtype": "tcp", 00:37:36.116 "traddr": "127.0.0.1", 00:37:36.116 "adrfam": "ipv4", 00:37:36.116 "trsvcid": "4420", 00:37:36.116 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:36.116 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:36.116 "prchk_reftag": false, 00:37:36.116 "prchk_guard": false, 00:37:36.116 "hdgst": false, 00:37:36.116 "ddgst": false, 00:37:36.116 "psk": "key1", 00:37:36.116 "allow_unrecognized_csi": false, 00:37:36.116 "method": "bdev_nvme_attach_controller", 00:37:36.116 "req_id": 1 00:37:36.116 } 00:37:36.116 Got JSON-RPC error response 00:37:36.116 response: 00:37:36.116 { 00:37:36.116 "code": -5, 00:37:36.116 "message": "Input/output error" 00:37:36.116 } 00:37:36.116 11:19:27 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:36.116 11:19:27 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:36.116 11:19:27 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:36.116 11:19:27 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:36.116 11:19:27 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:36.116 11:19:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:36.116 11:19:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:36.116 11:19:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:36.116 
11:19:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:36.116 11:19:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:36.376 11:19:27 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:36.376 11:19:27 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:36.376 11:19:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:36.376 11:19:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:36.376 11:19:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:36.376 11:19:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:36.376 11:19:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:36.637 11:19:27 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:36.637 11:19:27 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:36.637 11:19:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:36.637 11:19:27 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:36.637 11:19:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:36.898 11:19:28 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:36.898 11:19:28 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:36.898 11:19:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.157 11:19:28 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:37:37.157 11:19:28 keyring_file -- 
keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.s7cQP1MfWW 00:37:37.157 11:19:28 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.s7cQP1MfWW 00:37:37.157 11:19:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:37.157 11:19:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.s7cQP1MfWW 00:37:37.157 11:19:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:37.157 11:19:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:37.157 11:19:28 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:37.157 11:19:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:37.157 11:19:28 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.s7cQP1MfWW 00:37:37.157 11:19:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.s7cQP1MfWW 00:37:37.157 [2024-11-06 11:19:28.508841] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.s7cQP1MfWW': 0100660 00:37:37.157 [2024-11-06 11:19:28.508860] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:37.157 request: 00:37:37.157 { 00:37:37.157 "name": "key0", 00:37:37.157 "path": "/tmp/tmp.s7cQP1MfWW", 00:37:37.157 "method": "keyring_file_add_key", 00:37:37.157 "req_id": 1 00:37:37.157 } 00:37:37.157 Got JSON-RPC error response 00:37:37.157 response: 00:37:37.157 { 00:37:37.157 "code": -1, 00:37:37.157 "message": "Operation not permitted" 00:37:37.157 } 00:37:37.157 11:19:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:37.157 11:19:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:37.157 11:19:28 keyring_file -- common/autotest_common.sh@672 
-- # [[ -n '' ]] 00:37:37.157 11:19:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:37.157 11:19:28 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.s7cQP1MfWW 00:37:37.157 11:19:28 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.s7cQP1MfWW 00:37:37.157 11:19:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.s7cQP1MfWW 00:37:37.417 11:19:28 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.s7cQP1MfWW 00:37:37.417 11:19:28 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:37.417 11:19:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:37.417 11:19:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:37.417 11:19:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:37.417 11:19:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:37.417 11:19:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.677 11:19:28 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:37.677 11:19:28 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:37.677 11:19:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:37.677 11:19:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:37.677 11:19:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:37.677 11:19:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:37:37.677 11:19:28 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:37.677 11:19:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:37.677 11:19:28 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:37.677 11:19:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:37.677 [2024-11-06 11:19:29.030165] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.s7cQP1MfWW': No such file or directory 00:37:37.677 [2024-11-06 11:19:29.030180] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:37.677 [2024-11-06 11:19:29.030198] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:37.677 [2024-11-06 11:19:29.030204] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:37.677 [2024-11-06 11:19:29.030210] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:37.677 [2024-11-06 11:19:29.030215] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:37.677 request: 00:37:37.677 { 00:37:37.677 "name": "nvme0", 00:37:37.677 "trtype": "tcp", 00:37:37.677 "traddr": "127.0.0.1", 00:37:37.677 "adrfam": "ipv4", 00:37:37.677 "trsvcid": "4420", 00:37:37.677 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:37.677 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:37.677 "prchk_reftag": 
false, 00:37:37.677 "prchk_guard": false, 00:37:37.677 "hdgst": false, 00:37:37.677 "ddgst": false, 00:37:37.677 "psk": "key0", 00:37:37.677 "allow_unrecognized_csi": false, 00:37:37.677 "method": "bdev_nvme_attach_controller", 00:37:37.677 "req_id": 1 00:37:37.677 } 00:37:37.677 Got JSON-RPC error response 00:37:37.677 response: 00:37:37.677 { 00:37:37.677 "code": -19, 00:37:37.677 "message": "No such device" 00:37:37.677 } 00:37:37.677 11:19:29 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:37.677 11:19:29 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:37.677 11:19:29 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:37.677 11:19:29 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:37.677 11:19:29 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:37.677 11:19:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:37.938 11:19:29 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:37.938 11:19:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:37.938 11:19:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:37.938 11:19:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:37.938 11:19:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:37.938 11:19:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:37.938 11:19:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.93m3pjtjSt 00:37:37.938 11:19:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:37.938 11:19:29 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:37.938 11:19:29 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 
00:37:37.938 11:19:29 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:37.938 11:19:29 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:37.938 11:19:29 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:37.938 11:19:29 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:37.938 11:19:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.93m3pjtjSt 00:37:37.938 11:19:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.93m3pjtjSt 00:37:37.938 11:19:29 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.93m3pjtjSt 00:37:37.938 11:19:29 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.93m3pjtjSt 00:37:37.938 11:19:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.93m3pjtjSt 00:37:38.198 11:19:29 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:38.198 11:19:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:38.461 nvme0n1 00:37:38.461 11:19:29 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:38.461 11:19:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:38.461 11:19:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:38.461 11:19:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:38.461 11:19:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:38.461 11:19:29 keyring_file -- keyring/common.sh@10 -- # 
jq '.[] | select(.name == "key0")' 00:37:38.461 11:19:29 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:38.461 11:19:29 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:38.461 11:19:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:38.723 11:19:29 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:38.723 11:19:29 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:37:38.723 11:19:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:38.723 11:19:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:38.723 11:19:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:38.983 11:19:30 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:38.983 11:19:30 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:38.983 11:19:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:38.983 11:19:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:38.983 11:19:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:38.983 11:19:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:38.983 11:19:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:38.983 11:19:30 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:38.983 11:19:30 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:38.983 11:19:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:39.243 11:19:30 keyring_file -- 
keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:39.243 11:19:30 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:39.244 11:19:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:39.503 11:19:30 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:39.504 11:19:30 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.93m3pjtjSt 00:37:39.504 11:19:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.93m3pjtjSt 00:37:39.504 11:19:30 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.UOs9WcATIA 00:37:39.504 11:19:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.UOs9WcATIA 00:37:39.763 11:19:31 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:39.763 11:19:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:40.023 nvme0n1 00:37:40.023 11:19:31 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:40.023 11:19:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:40.284 11:19:31 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:40.284 "subsystems": [ 00:37:40.284 { 00:37:40.284 "subsystem": "keyring", 00:37:40.284 "config": [ 00:37:40.284 { 00:37:40.284 "method": 
"keyring_file_add_key", 00:37:40.284 "params": { 00:37:40.284 "name": "key0", 00:37:40.284 "path": "/tmp/tmp.93m3pjtjSt" 00:37:40.284 } 00:37:40.284 }, 00:37:40.284 { 00:37:40.284 "method": "keyring_file_add_key", 00:37:40.284 "params": { 00:37:40.284 "name": "key1", 00:37:40.284 "path": "/tmp/tmp.UOs9WcATIA" 00:37:40.284 } 00:37:40.284 } 00:37:40.284 ] 00:37:40.284 }, 00:37:40.284 { 00:37:40.284 "subsystem": "iobuf", 00:37:40.284 "config": [ 00:37:40.284 { 00:37:40.284 "method": "iobuf_set_options", 00:37:40.284 "params": { 00:37:40.284 "small_pool_count": 8192, 00:37:40.284 "large_pool_count": 1024, 00:37:40.284 "small_bufsize": 8192, 00:37:40.284 "large_bufsize": 135168, 00:37:40.284 "enable_numa": false 00:37:40.284 } 00:37:40.284 } 00:37:40.284 ] 00:37:40.284 }, 00:37:40.284 { 00:37:40.284 "subsystem": "sock", 00:37:40.284 "config": [ 00:37:40.284 { 00:37:40.284 "method": "sock_set_default_impl", 00:37:40.284 "params": { 00:37:40.284 "impl_name": "posix" 00:37:40.284 } 00:37:40.284 }, 00:37:40.284 { 00:37:40.284 "method": "sock_impl_set_options", 00:37:40.284 "params": { 00:37:40.284 "impl_name": "ssl", 00:37:40.284 "recv_buf_size": 4096, 00:37:40.284 "send_buf_size": 4096, 00:37:40.284 "enable_recv_pipe": true, 00:37:40.284 "enable_quickack": false, 00:37:40.284 "enable_placement_id": 0, 00:37:40.284 "enable_zerocopy_send_server": true, 00:37:40.284 "enable_zerocopy_send_client": false, 00:37:40.284 "zerocopy_threshold": 0, 00:37:40.284 "tls_version": 0, 00:37:40.284 "enable_ktls": false 00:37:40.284 } 00:37:40.284 }, 00:37:40.284 { 00:37:40.284 "method": "sock_impl_set_options", 00:37:40.284 "params": { 00:37:40.284 "impl_name": "posix", 00:37:40.284 "recv_buf_size": 2097152, 00:37:40.284 "send_buf_size": 2097152, 00:37:40.284 "enable_recv_pipe": true, 00:37:40.284 "enable_quickack": false, 00:37:40.284 "enable_placement_id": 0, 00:37:40.284 "enable_zerocopy_send_server": true, 00:37:40.284 "enable_zerocopy_send_client": false, 00:37:40.284 
"zerocopy_threshold": 0, 00:37:40.284 "tls_version": 0, 00:37:40.284 "enable_ktls": false 00:37:40.284 } 00:37:40.284 } 00:37:40.284 ] 00:37:40.284 }, 00:37:40.284 { 00:37:40.284 "subsystem": "vmd", 00:37:40.284 "config": [] 00:37:40.284 }, 00:37:40.284 { 00:37:40.284 "subsystem": "accel", 00:37:40.284 "config": [ 00:37:40.284 { 00:37:40.284 "method": "accel_set_options", 00:37:40.284 "params": { 00:37:40.284 "small_cache_size": 128, 00:37:40.284 "large_cache_size": 16, 00:37:40.284 "task_count": 2048, 00:37:40.284 "sequence_count": 2048, 00:37:40.284 "buf_count": 2048 00:37:40.284 } 00:37:40.284 } 00:37:40.284 ] 00:37:40.284 }, 00:37:40.284 { 00:37:40.284 "subsystem": "bdev", 00:37:40.284 "config": [ 00:37:40.284 { 00:37:40.284 "method": "bdev_set_options", 00:37:40.284 "params": { 00:37:40.284 "bdev_io_pool_size": 65535, 00:37:40.284 "bdev_io_cache_size": 256, 00:37:40.284 "bdev_auto_examine": true, 00:37:40.284 "iobuf_small_cache_size": 128, 00:37:40.284 "iobuf_large_cache_size": 16 00:37:40.284 } 00:37:40.284 }, 00:37:40.284 { 00:37:40.284 "method": "bdev_raid_set_options", 00:37:40.284 "params": { 00:37:40.284 "process_window_size_kb": 1024, 00:37:40.284 "process_max_bandwidth_mb_sec": 0 00:37:40.284 } 00:37:40.284 }, 00:37:40.284 { 00:37:40.284 "method": "bdev_iscsi_set_options", 00:37:40.284 "params": { 00:37:40.284 "timeout_sec": 30 00:37:40.284 } 00:37:40.284 }, 00:37:40.284 { 00:37:40.284 "method": "bdev_nvme_set_options", 00:37:40.284 "params": { 00:37:40.284 "action_on_timeout": "none", 00:37:40.284 "timeout_us": 0, 00:37:40.284 "timeout_admin_us": 0, 00:37:40.284 "keep_alive_timeout_ms": 10000, 00:37:40.284 "arbitration_burst": 0, 00:37:40.284 "low_priority_weight": 0, 00:37:40.284 "medium_priority_weight": 0, 00:37:40.284 "high_priority_weight": 0, 00:37:40.284 "nvme_adminq_poll_period_us": 10000, 00:37:40.284 "nvme_ioq_poll_period_us": 0, 00:37:40.284 "io_queue_requests": 512, 00:37:40.284 "delay_cmd_submit": true, 00:37:40.284 
"transport_retry_count": 4, 00:37:40.284 "bdev_retry_count": 3, 00:37:40.284 "transport_ack_timeout": 0, 00:37:40.284 "ctrlr_loss_timeout_sec": 0, 00:37:40.284 "reconnect_delay_sec": 0, 00:37:40.284 "fast_io_fail_timeout_sec": 0, 00:37:40.284 "disable_auto_failback": false, 00:37:40.284 "generate_uuids": false, 00:37:40.284 "transport_tos": 0, 00:37:40.284 "nvme_error_stat": false, 00:37:40.284 "rdma_srq_size": 0, 00:37:40.284 "io_path_stat": false, 00:37:40.284 "allow_accel_sequence": false, 00:37:40.284 "rdma_max_cq_size": 0, 00:37:40.284 "rdma_cm_event_timeout_ms": 0, 00:37:40.284 "dhchap_digests": [ 00:37:40.284 "sha256", 00:37:40.284 "sha384", 00:37:40.284 "sha512" 00:37:40.284 ], 00:37:40.284 "dhchap_dhgroups": [ 00:37:40.284 "null", 00:37:40.284 "ffdhe2048", 00:37:40.284 "ffdhe3072", 00:37:40.284 "ffdhe4096", 00:37:40.284 "ffdhe6144", 00:37:40.284 "ffdhe8192" 00:37:40.284 ] 00:37:40.284 } 00:37:40.284 }, 00:37:40.284 { 00:37:40.284 "method": "bdev_nvme_attach_controller", 00:37:40.284 "params": { 00:37:40.284 "name": "nvme0", 00:37:40.284 "trtype": "TCP", 00:37:40.284 "adrfam": "IPv4", 00:37:40.284 "traddr": "127.0.0.1", 00:37:40.284 "trsvcid": "4420", 00:37:40.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:40.284 "prchk_reftag": false, 00:37:40.284 "prchk_guard": false, 00:37:40.285 "ctrlr_loss_timeout_sec": 0, 00:37:40.285 "reconnect_delay_sec": 0, 00:37:40.285 "fast_io_fail_timeout_sec": 0, 00:37:40.285 "psk": "key0", 00:37:40.285 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:40.285 "hdgst": false, 00:37:40.285 "ddgst": false, 00:37:40.285 "multipath": "multipath" 00:37:40.285 } 00:37:40.285 }, 00:37:40.285 { 00:37:40.285 "method": "bdev_nvme_set_hotplug", 00:37:40.285 "params": { 00:37:40.285 "period_us": 100000, 00:37:40.285 "enable": false 00:37:40.285 } 00:37:40.285 }, 00:37:40.285 { 00:37:40.285 "method": "bdev_wait_for_examine" 00:37:40.285 } 00:37:40.285 ] 00:37:40.285 }, 00:37:40.285 { 00:37:40.285 "subsystem": "nbd", 00:37:40.285 "config": [] 
00:37:40.285 } 00:37:40.285 ] 00:37:40.285 }' 00:37:40.285 11:19:31 keyring_file -- keyring/file.sh@115 -- # killprocess 3586217 00:37:40.285 11:19:31 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3586217 ']' 00:37:40.285 11:19:31 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3586217 00:37:40.285 11:19:31 keyring_file -- common/autotest_common.sh@957 -- # uname 00:37:40.285 11:19:31 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:40.285 11:19:31 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3586217 00:37:40.285 11:19:31 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:40.285 11:19:31 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:40.285 11:19:31 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3586217' 00:37:40.285 killing process with pid 3586217 00:37:40.285 11:19:31 keyring_file -- common/autotest_common.sh@971 -- # kill 3586217 00:37:40.285 Received shutdown signal, test time was about 1.000000 seconds 00:37:40.285 00:37:40.285 Latency(us) 00:37:40.285 [2024-11-06T10:19:31.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:40.285 [2024-11-06T10:19:31.707Z] =================================================================================================================== 00:37:40.285 [2024-11-06T10:19:31.707Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:40.285 11:19:31 keyring_file -- common/autotest_common.sh@976 -- # wait 3586217 00:37:40.285 11:19:31 keyring_file -- keyring/file.sh@118 -- # bperfpid=3588022 00:37:40.285 11:19:31 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3588022 /var/tmp/bperf.sock 00:37:40.285 11:19:31 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3588022 ']' 00:37:40.285 11:19:31 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:40.285 11:19:31 keyring_file 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:37:40.285 11:19:31 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:40.285 11:19:31 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:40.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:40.285 11:19:31 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:40.285 11:19:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:40.285 11:19:31 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:40.285 "subsystems": [ 00:37:40.285 { 00:37:40.285 "subsystem": "keyring", 00:37:40.285 "config": [ 00:37:40.285 { 00:37:40.285 "method": "keyring_file_add_key", 00:37:40.285 "params": { 00:37:40.285 "name": "key0", 00:37:40.285 "path": "/tmp/tmp.93m3pjtjSt" 00:37:40.285 } 00:37:40.285 }, 00:37:40.285 { 00:37:40.285 "method": "keyring_file_add_key", 00:37:40.285 "params": { 00:37:40.285 "name": "key1", 00:37:40.285 "path": "/tmp/tmp.UOs9WcATIA" 00:37:40.285 } 00:37:40.285 } 00:37:40.285 ] 00:37:40.285 }, 00:37:40.285 { 00:37:40.285 "subsystem": "iobuf", 00:37:40.285 "config": [ 00:37:40.285 { 00:37:40.285 "method": "iobuf_set_options", 00:37:40.285 "params": { 00:37:40.285 "small_pool_count": 8192, 00:37:40.285 "large_pool_count": 1024, 00:37:40.285 "small_bufsize": 8192, 00:37:40.285 "large_bufsize": 135168, 00:37:40.285 "enable_numa": false 00:37:40.285 } 00:37:40.285 } 00:37:40.285 ] 00:37:40.285 }, 00:37:40.285 { 00:37:40.285 "subsystem": "sock", 00:37:40.285 "config": [ 00:37:40.285 { 00:37:40.285 "method": "sock_set_default_impl", 00:37:40.285 "params": { 00:37:40.285 "impl_name": "posix" 00:37:40.285 } 00:37:40.285 }, 00:37:40.285 { 00:37:40.285 "method": "sock_impl_set_options", 
00:37:40.285 "params": { 00:37:40.285 "impl_name": "ssl", 00:37:40.285 "recv_buf_size": 4096, 00:37:40.285 "send_buf_size": 4096, 00:37:40.285 "enable_recv_pipe": true, 00:37:40.285 "enable_quickack": false, 00:37:40.285 "enable_placement_id": 0, 00:37:40.285 "enable_zerocopy_send_server": true, 00:37:40.285 "enable_zerocopy_send_client": false, 00:37:40.285 "zerocopy_threshold": 0, 00:37:40.285 "tls_version": 0, 00:37:40.285 "enable_ktls": false 00:37:40.285 } 00:37:40.285 }, 00:37:40.285 { 00:37:40.285 "method": "sock_impl_set_options", 00:37:40.285 "params": { 00:37:40.285 "impl_name": "posix", 00:37:40.285 "recv_buf_size": 2097152, 00:37:40.285 "send_buf_size": 2097152, 00:37:40.285 "enable_recv_pipe": true, 00:37:40.285 "enable_quickack": false, 00:37:40.285 "enable_placement_id": 0, 00:37:40.285 "enable_zerocopy_send_server": true, 00:37:40.285 "enable_zerocopy_send_client": false, 00:37:40.285 "zerocopy_threshold": 0, 00:37:40.285 "tls_version": 0, 00:37:40.285 "enable_ktls": false 00:37:40.285 } 00:37:40.285 } 00:37:40.285 ] 00:37:40.285 }, 00:37:40.285 { 00:37:40.285 "subsystem": "vmd", 00:37:40.285 "config": [] 00:37:40.285 }, 00:37:40.285 { 00:37:40.285 "subsystem": "accel", 00:37:40.285 "config": [ 00:37:40.285 { 00:37:40.285 "method": "accel_set_options", 00:37:40.285 "params": { 00:37:40.285 "small_cache_size": 128, 00:37:40.285 "large_cache_size": 16, 00:37:40.285 "task_count": 2048, 00:37:40.285 "sequence_count": 2048, 00:37:40.285 "buf_count": 2048 00:37:40.285 } 00:37:40.285 } 00:37:40.285 ] 00:37:40.285 }, 00:37:40.285 { 00:37:40.285 "subsystem": "bdev", 00:37:40.285 "config": [ 00:37:40.285 { 00:37:40.285 "method": "bdev_set_options", 00:37:40.285 "params": { 00:37:40.285 "bdev_io_pool_size": 65535, 00:37:40.285 "bdev_io_cache_size": 256, 00:37:40.285 "bdev_auto_examine": true, 00:37:40.285 "iobuf_small_cache_size": 128, 00:37:40.285 "iobuf_large_cache_size": 16 00:37:40.285 } 00:37:40.285 }, 00:37:40.285 { 00:37:40.285 "method": 
"bdev_raid_set_options", 00:37:40.285 "params": { 00:37:40.285 "process_window_size_kb": 1024, 00:37:40.285 "process_max_bandwidth_mb_sec": 0 00:37:40.285 } 00:37:40.285 }, 00:37:40.285 { 00:37:40.285 "method": "bdev_iscsi_set_options", 00:37:40.285 "params": { 00:37:40.285 "timeout_sec": 30 00:37:40.285 } 00:37:40.285 }, 00:37:40.285 { 00:37:40.285 "method": "bdev_nvme_set_options", 00:37:40.285 "params": { 00:37:40.285 "action_on_timeout": "none", 00:37:40.285 "timeout_us": 0, 00:37:40.285 "timeout_admin_us": 0, 00:37:40.285 "keep_alive_timeout_ms": 10000, 00:37:40.285 "arbitration_burst": 0, 00:37:40.285 "low_priority_weight": 0, 00:37:40.285 "medium_priority_weight": 0, 00:37:40.285 "high_priority_weight": 0, 00:37:40.285 "nvme_adminq_poll_period_us": 10000, 00:37:40.285 "nvme_ioq_poll_period_us": 0, 00:37:40.285 "io_queue_requests": 512, 00:37:40.285 "delay_cmd_submit": true, 00:37:40.285 "transport_retry_count": 4, 00:37:40.285 "bdev_retry_count": 3, 00:37:40.285 "transport_ack_timeout": 0, 00:37:40.285 "ctrlr_loss_timeout_sec": 0, 00:37:40.285 "reconnect_delay_sec": 0, 00:37:40.285 "fast_io_fail_timeout_sec": 0, 00:37:40.285 "disable_auto_failback": false, 00:37:40.285 "generate_uuids": false, 00:37:40.285 "transport_tos": 0, 00:37:40.285 "nvme_error_stat": false, 00:37:40.285 "rdma_srq_size": 0, 00:37:40.285 "io_path_stat": false, 00:37:40.285 "allow_accel_sequence": false, 00:37:40.285 "rdma_max_cq_size": 0, 00:37:40.285 "rdma_cm_event_timeout_ms": 0, 00:37:40.285 "dhchap_digests": [ 00:37:40.285 "sha256", 00:37:40.285 "sha384", 00:37:40.285 "sha512" 00:37:40.285 ], 00:37:40.285 "dhchap_dhgroups": [ 00:37:40.285 "null", 00:37:40.285 "ffdhe2048", 00:37:40.285 "ffdhe3072", 00:37:40.285 "ffdhe4096", 00:37:40.285 "ffdhe6144", 00:37:40.285 "ffdhe8192" 00:37:40.285 ] 00:37:40.286 } 00:37:40.286 }, 00:37:40.286 { 00:37:40.286 "method": "bdev_nvme_attach_controller", 00:37:40.286 "params": { 00:37:40.286 "name": "nvme0", 00:37:40.286 "trtype": "TCP", 00:37:40.286 
"adrfam": "IPv4", 00:37:40.286 "traddr": "127.0.0.1", 00:37:40.286 "trsvcid": "4420", 00:37:40.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:40.286 "prchk_reftag": false, 00:37:40.286 "prchk_guard": false, 00:37:40.286 "ctrlr_loss_timeout_sec": 0, 00:37:40.286 "reconnect_delay_sec": 0, 00:37:40.286 "fast_io_fail_timeout_sec": 0, 00:37:40.286 "psk": "key0", 00:37:40.286 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:40.286 "hdgst": false, 00:37:40.286 "ddgst": false, 00:37:40.286 "multipath": "multipath" 00:37:40.286 } 00:37:40.286 }, 00:37:40.286 { 00:37:40.286 "method": "bdev_nvme_set_hotplug", 00:37:40.286 "params": { 00:37:40.286 "period_us": 100000, 00:37:40.286 "enable": false 00:37:40.286 } 00:37:40.286 }, 00:37:40.286 { 00:37:40.286 "method": "bdev_wait_for_examine" 00:37:40.286 } 00:37:40.286 ] 00:37:40.286 }, 00:37:40.286 { 00:37:40.286 "subsystem": "nbd", 00:37:40.286 "config": [] 00:37:40.286 } 00:37:40.286 ] 00:37:40.286 }' 00:37:40.545 [2024-11-06 11:19:31.731884] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:37:40.545 [2024-11-06 11:19:31.731942] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3588022 ] 00:37:40.545 [2024-11-06 11:19:31.815876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:40.545 [2024-11-06 11:19:31.845078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:40.805 [2024-11-06 11:19:31.987958] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:41.376 11:19:32 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:41.376 11:19:32 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:37:41.376 11:19:32 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:41.376 11:19:32 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:41.376 11:19:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:41.376 11:19:32 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:41.376 11:19:32 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:41.376 11:19:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:41.376 11:19:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:41.376 11:19:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:41.376 11:19:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:41.376 11:19:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:41.636 11:19:32 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:41.636 11:19:32 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:41.636 11:19:32 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:41.636 11:19:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:41.636 11:19:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:41.636 11:19:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:41.636 11:19:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:41.636 11:19:33 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:37:41.636 11:19:33 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:41.636 11:19:33 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:41.636 11:19:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:41.896 11:19:33 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:41.896 11:19:33 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:41.896 11:19:33 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.93m3pjtjSt /tmp/tmp.UOs9WcATIA 00:37:41.896 11:19:33 keyring_file -- keyring/file.sh@20 -- # killprocess 3588022 00:37:41.897 11:19:33 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3588022 ']' 00:37:41.897 11:19:33 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3588022 00:37:41.897 11:19:33 keyring_file -- common/autotest_common.sh@957 -- # uname 00:37:41.897 11:19:33 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:41.897 11:19:33 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3588022 00:37:41.897 11:19:33 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:41.897 11:19:33 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:41.897 11:19:33 keyring_file -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 3588022' 00:37:41.897 killing process with pid 3588022 00:37:41.897 11:19:33 keyring_file -- common/autotest_common.sh@971 -- # kill 3588022 00:37:41.897 Received shutdown signal, test time was about 1.000000 seconds 00:37:41.897 00:37:41.897 Latency(us) 00:37:41.897 [2024-11-06T10:19:33.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:41.897 [2024-11-06T10:19:33.319Z] =================================================================================================================== 00:37:41.897 [2024-11-06T10:19:33.319Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:41.897 11:19:33 keyring_file -- common/autotest_common.sh@976 -- # wait 3588022 00:37:42.157 11:19:33 keyring_file -- keyring/file.sh@21 -- # killprocess 3586087 00:37:42.157 11:19:33 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3586087 ']' 00:37:42.157 11:19:33 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3586087 00:37:42.157 11:19:33 keyring_file -- common/autotest_common.sh@957 -- # uname 00:37:42.157 11:19:33 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:42.157 11:19:33 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3586087 00:37:42.157 11:19:33 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:42.157 11:19:33 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:42.157 11:19:33 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3586087' 00:37:42.157 killing process with pid 3586087 00:37:42.157 11:19:33 keyring_file -- common/autotest_common.sh@971 -- # kill 3586087 00:37:42.157 11:19:33 keyring_file -- common/autotest_common.sh@976 -- # wait 3586087 00:37:42.417 00:37:42.417 real 0m11.815s 00:37:42.417 user 0m28.520s 00:37:42.417 sys 0m2.567s 00:37:42.417 11:19:33 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:37:42.417 11:19:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:42.417 ************************************ 00:37:42.417 END TEST keyring_file 00:37:42.417 ************************************ 00:37:42.417 11:19:33 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:37:42.417 11:19:33 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:42.417 11:19:33 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:37:42.417 11:19:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:42.417 11:19:33 -- common/autotest_common.sh@10 -- # set +x 00:37:42.417 ************************************ 00:37:42.417 START TEST keyring_linux 00:37:42.417 ************************************ 00:37:42.417 11:19:33 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:42.417 Joined session keyring: 951985373 00:37:42.417 * Looking for test storage... 
00:37:42.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:42.678 11:19:33 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:42.678 11:19:33 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:37:42.678 11:19:33 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:42.678 11:19:33 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:42.678 11:19:33 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:42.678 11:19:33 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:42.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.678 --rc genhtml_branch_coverage=1 00:37:42.678 --rc genhtml_function_coverage=1 00:37:42.678 --rc genhtml_legend=1 00:37:42.678 --rc geninfo_all_blocks=1 00:37:42.678 --rc geninfo_unexecuted_blocks=1 00:37:42.678 00:37:42.678 ' 00:37:42.678 11:19:33 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:42.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.678 --rc genhtml_branch_coverage=1 00:37:42.678 --rc genhtml_function_coverage=1 00:37:42.678 --rc genhtml_legend=1 00:37:42.678 --rc geninfo_all_blocks=1 00:37:42.678 --rc geninfo_unexecuted_blocks=1 00:37:42.678 00:37:42.678 ' 
00:37:42.678 11:19:33 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:42.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.678 --rc genhtml_branch_coverage=1 00:37:42.678 --rc genhtml_function_coverage=1 00:37:42.678 --rc genhtml_legend=1 00:37:42.678 --rc geninfo_all_blocks=1 00:37:42.678 --rc geninfo_unexecuted_blocks=1 00:37:42.678 00:37:42.678 ' 00:37:42.678 11:19:33 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:42.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.678 --rc genhtml_branch_coverage=1 00:37:42.678 --rc genhtml_function_coverage=1 00:37:42.678 --rc genhtml_legend=1 00:37:42.678 --rc geninfo_all_blocks=1 00:37:42.678 --rc geninfo_unexecuted_blocks=1 00:37:42.678 00:37:42.678 ' 00:37:42.678 11:19:33 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:42.678 11:19:33 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:42.678 11:19:33 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:42.678 11:19:33 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:42.678 11:19:33 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.678 11:19:33 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.678 11:19:33 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.678 11:19:33 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:42.679 11:19:33 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.679 11:19:33 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:37:42.679 11:19:33 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:42.679 11:19:33 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:42.679 11:19:33 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:42.679 11:19:33 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:42.679 11:19:33 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:42.679 11:19:33 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:37:42.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:42.679 11:19:33 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:42.679 11:19:33 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:42.679 11:19:33 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:42.679 11:19:33 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:42.679 11:19:33 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:42.679 11:19:33 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:42.679 11:19:33 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:42.679 11:19:33 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:42.679 11:19:33 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:42.679 11:19:33 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:42.679 11:19:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:42.679 11:19:33 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:42.679 11:19:33 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:42.679 11:19:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:42.679 11:19:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:42.679 11:19:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:42.679 11:19:33 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:42.679 11:19:33 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:42.679 11:19:33 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:42.679 11:19:33 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:37:42.679 11:19:33 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:42.679 11:19:33 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:42.679 11:19:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:42.679 11:19:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:42.679 /tmp/:spdk-test:key0 00:37:42.679 11:19:34 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:42.679 11:19:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:42.679 11:19:34 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:42.679 11:19:34 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:42.679 11:19:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:42.679 11:19:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:42.679 11:19:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:42.679 11:19:34 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:42.679 11:19:34 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:42.679 11:19:34 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:42.679 11:19:34 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:42.679 11:19:34 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:42.679 11:19:34 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:42.679 11:19:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:42.679 11:19:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:42.679 /tmp/:spdk-test:key1 00:37:42.679 11:19:34 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3588459 00:37:42.679 11:19:34 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 3588459 00:37:42.679 11:19:34 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:42.679 11:19:34 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3588459 ']' 00:37:42.679 11:19:34 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:42.679 11:19:34 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:42.679 11:19:34 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:42.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:42.679 11:19:34 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:42.679 11:19:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:42.939 [2024-11-06 11:19:34.143918] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:37:42.939 [2024-11-06 11:19:34.143999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3588459 ] 00:37:42.939 [2024-11-06 11:19:34.220076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.939 [2024-11-06 11:19:34.262406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:43.509 11:19:34 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:43.509 11:19:34 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:37:43.509 11:19:34 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:43.509 11:19:34 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.509 11:19:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:43.509 [2024-11-06 11:19:34.918143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:43.770 null0 00:37:43.770 [2024-11-06 11:19:34.950196] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:43.770 [2024-11-06 11:19:34.950606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:43.770 11:19:34 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.770 11:19:34 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:43.770 847211751 00:37:43.770 11:19:34 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:43.770 249743200 00:37:43.770 11:19:34 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3588796 00:37:43.770 11:19:34 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3588796 /var/tmp/bperf.sock 00:37:43.770 11:19:34 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:43.770 11:19:34 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3588796 ']' 00:37:43.770 11:19:34 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:43.770 11:19:34 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:43.770 11:19:34 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:43.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:43.770 11:19:34 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:43.770 11:19:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:43.770 [2024-11-06 11:19:35.027261] Starting SPDK v25.01-pre git sha1 f0e4b91ff / DPDK 24.03.0 initialization... 
00:37:43.770 [2024-11-06 11:19:35.027311] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3588796 ] 00:37:43.770 [2024-11-06 11:19:35.111151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:43.770 [2024-11-06 11:19:35.141006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:44.710 11:19:35 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:44.710 11:19:35 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:37:44.710 11:19:35 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:44.710 11:19:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:44.710 11:19:35 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:44.710 11:19:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:44.970 11:19:36 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:44.970 11:19:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:44.970 [2024-11-06 11:19:36.353082] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:45.230 nvme0n1 00:37:45.230 11:19:36 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:37:45.230 11:19:36 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:45.230 11:19:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:45.230 11:19:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:45.230 11:19:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:45.230 11:19:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:45.230 11:19:36 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:45.230 11:19:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:45.230 11:19:36 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:45.230 11:19:36 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:45.230 11:19:36 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:45.230 11:19:36 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:45.230 11:19:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:45.490 11:19:36 keyring_linux -- keyring/linux.sh@25 -- # sn=847211751 00:37:45.490 11:19:36 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:45.490 11:19:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:45.490 11:19:36 keyring_linux -- keyring/linux.sh@26 -- # [[ 847211751 == \8\4\7\2\1\1\7\5\1 ]] 00:37:45.490 11:19:36 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 847211751 00:37:45.490 11:19:36 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:45.490 11:19:36 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:45.490 Running I/O for 1 seconds... 00:37:46.871 16262.00 IOPS, 63.52 MiB/s 00:37:46.871 Latency(us) 00:37:46.871 [2024-11-06T10:19:38.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:46.871 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:46.871 nvme0n1 : 1.01 16262.36 63.52 0.00 0.00 7837.93 1897.81 9011.20 00:37:46.871 [2024-11-06T10:19:38.293Z] =================================================================================================================== 00:37:46.871 [2024-11-06T10:19:38.293Z] Total : 16262.36 63.52 0.00 0.00 7837.93 1897.81 9011.20 00:37:46.871 { 00:37:46.871 "results": [ 00:37:46.871 { 00:37:46.871 "job": "nvme0n1", 00:37:46.871 "core_mask": "0x2", 00:37:46.871 "workload": "randread", 00:37:46.871 "status": "finished", 00:37:46.871 "queue_depth": 128, 00:37:46.871 "io_size": 4096, 00:37:46.871 "runtime": 1.007849, 00:37:46.871 "iops": 16262.356761776813, 00:37:46.871 "mibps": 63.524831100690676, 00:37:46.871 "io_failed": 0, 00:37:46.871 "io_timeout": 0, 00:37:46.871 "avg_latency_us": 7837.9309139719335, 00:37:46.871 "min_latency_us": 1897.8133333333333, 00:37:46.871 "max_latency_us": 9011.2 00:37:46.871 } 00:37:46.871 ], 00:37:46.871 "core_count": 1 00:37:46.871 } 00:37:46.871 11:19:37 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:46.871 11:19:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:46.871 11:19:38 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:46.871 11:19:38 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:46.871 11:19:38 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:46.871 11:19:38 keyring_linux -- 
keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:46.871 11:19:38 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:46.871 11:19:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:46.871 11:19:38 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:46.871 11:19:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:46.871 11:19:38 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:46.871 11:19:38 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:46.871 11:19:38 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:37:46.871 11:19:38 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:46.871 11:19:38 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:46.871 11:19:38 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:46.871 11:19:38 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:46.871 11:19:38 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:46.871 11:19:38 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:46.871 11:19:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:47.132 [2024-11-06 11:19:38.441070] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:47.132 [2024-11-06 11:19:38.442006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c57480 (107): Transport endpoint is not connected 00:37:47.132 [2024-11-06 11:19:38.443002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c57480 (9): Bad file descriptor 00:37:47.132 [2024-11-06 11:19:38.444004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:47.132 [2024-11-06 11:19:38.444016] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:47.132 [2024-11-06 11:19:38.444022] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:47.132 [2024-11-06 11:19:38.444029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:37:47.132 request: 00:37:47.132 { 00:37:47.132 "name": "nvme0", 00:37:47.132 "trtype": "tcp", 00:37:47.132 "traddr": "127.0.0.1", 00:37:47.132 "adrfam": "ipv4", 00:37:47.132 "trsvcid": "4420", 00:37:47.132 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:47.132 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:47.132 "prchk_reftag": false, 00:37:47.132 "prchk_guard": false, 00:37:47.132 "hdgst": false, 00:37:47.132 "ddgst": false, 00:37:47.132 "psk": ":spdk-test:key1", 00:37:47.132 "allow_unrecognized_csi": false, 00:37:47.132 "method": "bdev_nvme_attach_controller", 00:37:47.132 "req_id": 1 00:37:47.132 } 00:37:47.132 Got JSON-RPC error response 00:37:47.132 response: 00:37:47.132 { 00:37:47.132 "code": -5, 00:37:47.132 "message": "Input/output error" 00:37:47.132 } 00:37:47.132 11:19:38 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:37:47.132 11:19:38 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:47.132 11:19:38 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:47.132 11:19:38 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:47.132 11:19:38 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:47.132 11:19:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:47.132 11:19:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:47.132 11:19:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:47.132 11:19:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:47.132 11:19:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:47.132 11:19:38 keyring_linux -- keyring/linux.sh@33 -- # sn=847211751 00:37:47.132 11:19:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 847211751 00:37:47.132 1 links removed 00:37:47.132 11:19:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:47.132 11:19:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:47.132 
11:19:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:47.132 11:19:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:47.132 11:19:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:47.132 11:19:38 keyring_linux -- keyring/linux.sh@33 -- # sn=249743200 00:37:47.132 11:19:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 249743200 00:37:47.132 1 links removed 00:37:47.132 11:19:38 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3588796 00:37:47.132 11:19:38 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3588796 ']' 00:37:47.132 11:19:38 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3588796 00:37:47.132 11:19:38 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:37:47.132 11:19:38 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:47.132 11:19:38 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3588796 00:37:47.132 11:19:38 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:47.132 11:19:38 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:47.132 11:19:38 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3588796' 00:37:47.132 killing process with pid 3588796 00:37:47.132 11:19:38 keyring_linux -- common/autotest_common.sh@971 -- # kill 3588796 00:37:47.132 Received shutdown signal, test time was about 1.000000 seconds 00:37:47.132 00:37:47.132 Latency(us) 00:37:47.132 [2024-11-06T10:19:38.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.132 [2024-11-06T10:19:38.554Z] =================================================================================================================== 00:37:47.132 [2024-11-06T10:19:38.554Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:47.132 11:19:38 keyring_linux -- common/autotest_common.sh@976 -- # wait 3588796 
00:37:47.392 11:19:38 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3588459 00:37:47.392 11:19:38 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3588459 ']' 00:37:47.392 11:19:38 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3588459 00:37:47.392 11:19:38 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:37:47.392 11:19:38 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:47.392 11:19:38 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3588459 00:37:47.392 11:19:38 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:47.392 11:19:38 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:47.393 11:19:38 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3588459' 00:37:47.393 killing process with pid 3588459 00:37:47.393 11:19:38 keyring_linux -- common/autotest_common.sh@971 -- # kill 3588459 00:37:47.393 11:19:38 keyring_linux -- common/autotest_common.sh@976 -- # wait 3588459 00:37:47.653 00:37:47.653 real 0m5.185s 00:37:47.653 user 0m9.597s 00:37:47.653 sys 0m1.380s 00:37:47.653 11:19:38 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:47.653 11:19:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:47.653 ************************************ 00:37:47.653 END TEST keyring_linux 00:37:47.653 ************************************ 00:37:47.653 11:19:38 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:37:47.653 11:19:38 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:47.653 11:19:38 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:47.653 11:19:38 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:37:47.653 11:19:38 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:37:47.653 11:19:38 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:37:47.653 11:19:38 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:47.653 11:19:38 -- spdk/autotest.sh@342 -- # 
'[' 0 -eq 1 ']' 00:37:47.653 11:19:38 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:37:47.653 11:19:38 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:37:47.653 11:19:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:47.653 11:19:38 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:37:47.653 11:19:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:47.653 11:19:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:47.653 11:19:38 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:37:47.653 11:19:38 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:37:47.653 11:19:38 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:37:47.653 11:19:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:47.653 11:19:38 -- common/autotest_common.sh@10 -- # set +x 00:37:47.653 11:19:38 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:37:47.653 11:19:38 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:37:47.653 11:19:38 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:37:47.653 11:19:38 -- common/autotest_common.sh@10 -- # set +x 00:37:55.788 INFO: APP EXITING 00:37:55.788 INFO: killing all VMs 00:37:55.788 INFO: killing vhost app 00:37:55.788 INFO: EXIT DONE 00:37:58.330 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:37:58.330 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:37:58.590 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:37:58.590 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:37:58.590 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:37:58.590 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:37:58.590 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:37:58.590 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:37:58.590 0000:65:00.0 (144d a80a): Already using the nvme driver 00:37:58.590 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:37:58.590 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:37:58.590 
0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:37:58.590 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:37:58.854 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:37:58.854 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:37:58.854 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:37:58.854 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:02.157 Cleaning 00:38:02.157 Removing: /var/run/dpdk/spdk0/config 00:38:02.157 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:02.157 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:02.157 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:02.157 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:02.157 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:02.157 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:02.157 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:02.157 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:02.157 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:02.157 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:02.157 Removing: /var/run/dpdk/spdk1/config 00:38:02.157 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:02.157 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:02.157 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:02.157 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:02.157 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:02.157 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:02.418 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:02.418 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:02.418 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:02.418 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:02.418 Removing: /var/run/dpdk/spdk2/config 00:38:02.418 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:02.418 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:02.418 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:02.418 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:02.418 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:02.418 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:02.418 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:02.418 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:02.418 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:02.418 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:02.418 Removing: /var/run/dpdk/spdk3/config 00:38:02.418 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:02.418 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:02.418 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:02.418 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:02.418 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:02.418 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:02.418 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:02.418 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:02.418 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:02.418 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:02.418 Removing: /var/run/dpdk/spdk4/config 00:38:02.418 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:02.418 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:02.418 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:02.418 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:02.418 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:02.418 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:02.418 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:02.418 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:02.418 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:02.418 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:38:02.418 Removing: /dev/shm/bdev_svc_trace.1 00:38:02.418 Removing: /dev/shm/nvmf_trace.0 00:38:02.418 Removing: /dev/shm/spdk_tgt_trace.pid3013867 00:38:02.418 Removing: /var/run/dpdk/spdk0 00:38:02.418 Removing: /var/run/dpdk/spdk1 00:38:02.418 Removing: /var/run/dpdk/spdk2 00:38:02.418 Removing: /var/run/dpdk/spdk3 00:38:02.418 Removing: /var/run/dpdk/spdk4 00:38:02.418 Removing: /var/run/dpdk/spdk_pid3012206 00:38:02.418 Removing: /var/run/dpdk/spdk_pid3013867 00:38:02.418 Removing: /var/run/dpdk/spdk_pid3014549 00:38:02.418 Removing: /var/run/dpdk/spdk_pid3015613 00:38:02.418 Removing: /var/run/dpdk/spdk_pid3015927 00:38:02.418 Removing: /var/run/dpdk/spdk_pid3017053 00:38:02.418 Removing: /var/run/dpdk/spdk_pid3017330 00:38:02.418 Removing: /var/run/dpdk/spdk_pid3017656 00:38:02.418 Removing: /var/run/dpdk/spdk_pid3018719 00:38:02.418 Removing: /var/run/dpdk/spdk_pid3019390 00:38:02.418 Removing: /var/run/dpdk/spdk_pid3019787 00:38:02.418 Removing: /var/run/dpdk/spdk_pid3020182 00:38:02.418 Removing: /var/run/dpdk/spdk_pid3020594 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3020997 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3021355 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3021501 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3021781 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3022869 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3026436 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3026686 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3027028 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3027183 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3027618 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3027895 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3028269 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3028603 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3028864 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3028982 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3029342 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3029362 00:38:02.679 Removing: 
/var/run/dpdk/spdk_pid3029946 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3030163 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3030560 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3035104 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3040474 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3052608 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3053753 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3059124 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3059496 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3064559 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3071486 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3074727 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3087245 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3098018 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3100253 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3101334 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3122588 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3127365 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3183839 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3190232 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3197077 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3204969 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3204980 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3205984 00:38:02.679 Removing: /var/run/dpdk/spdk_pid3207003 00:38:02.680 Removing: /var/run/dpdk/spdk_pid3208014 00:38:02.680 Removing: /var/run/dpdk/spdk_pid3208680 00:38:02.680 Removing: /var/run/dpdk/spdk_pid3208693 00:38:02.680 Removing: /var/run/dpdk/spdk_pid3209021 00:38:02.680 Removing: /var/run/dpdk/spdk_pid3209188 00:38:02.680 Removing: /var/run/dpdk/spdk_pid3209329 00:38:02.680 Removing: /var/run/dpdk/spdk_pid3210361 00:38:02.680 Removing: /var/run/dpdk/spdk_pid3211360 00:38:02.680 Removing: /var/run/dpdk/spdk_pid3212476 00:38:02.680 Removing: /var/run/dpdk/spdk_pid3213145 00:38:02.680 Removing: /var/run/dpdk/spdk_pid3213156 00:38:02.680 Removing: /var/run/dpdk/spdk_pid3213489 00:38:02.680 Removing: /var/run/dpdk/spdk_pid3215394 
00:38:02.680 Removing: /var/run/dpdk/spdk_pid3216475 00:38:02.680 Removing: /var/run/dpdk/spdk_pid3226455 00:38:02.680 Removing: /var/run/dpdk/spdk_pid3262277 00:38:02.680 Removing: /var/run/dpdk/spdk_pid3267696 00:38:02.680 Removing: /var/run/dpdk/spdk_pid3269686 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3271835 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3272038 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3272070 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3272387 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3272934 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3275128 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3276212 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3276589 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3279302 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3280012 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3280796 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3285774 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3292464 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3292465 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3292466 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3297007 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3307640 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3312461 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3319818 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3321484 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3323130 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3324864 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3330244 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3335701 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3340417 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3349499 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3349511 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3354640 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3354895 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3355272 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3355719 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3355850 00:38:02.941 Removing: 
/var/run/dpdk/spdk_pid3361738 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3362346 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3367780 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3370977 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3377597 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3384127 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3394038 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3402544 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3402579 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3426180 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3427021 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3427802 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3428488 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3429542 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3430227 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3430922 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3431601 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3436830 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3437031 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3444249 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3444418 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3450882 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3455929 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3467772 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3468466 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3473609 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3474033 00:38:02.941 Removing: /var/run/dpdk/spdk_pid3478989 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3485644 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3488679 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3500814 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3511404 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3513466 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3515068 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3534670 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3539325 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3542592 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3549765 
00:38:03.203 Removing: /var/run/dpdk/spdk_pid3549887 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3555843 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3558101 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3560393 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3561798 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3564428 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3566179 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3576064 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3576705 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3577370 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3580218 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3580683 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3581329 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3586087 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3586217 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3588022 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3588459 00:38:03.203 Removing: /var/run/dpdk/spdk_pid3588796 00:38:03.203 Clean 00:38:03.203 11:19:54 -- common/autotest_common.sh@1451 -- # return 0 00:38:03.204 11:19:54 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:38:03.204 11:19:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:03.204 11:19:54 -- common/autotest_common.sh@10 -- # set +x 00:38:03.204 11:19:54 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:38:03.204 11:19:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:03.204 11:19:54 -- common/autotest_common.sh@10 -- # set +x 00:38:03.465 11:19:54 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:03.465 11:19:54 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:03.465 11:19:54 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:03.465 11:19:54 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:38:03.465 11:19:54 -- spdk/autotest.sh@394 -- # hostname 00:38:03.465 
11:19:54 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:03.465 geninfo: WARNING: invalid characters removed from testname! 00:38:30.056 11:20:20 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:31.969 11:20:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:33.880 11:20:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:35.265 11:20:26 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:37.174 11:20:28 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:38.558 11:20:29 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:40.469 11:20:31 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:40.469 11:20:31 -- spdk/autorun.sh@1 -- $ timing_finish 00:38:40.469 11:20:31 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:38:40.469 11:20:31 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:40.469 11:20:31 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:38:40.469 11:20:31 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:40.469 + [[ -n 2927165 ]] 00:38:40.469 + sudo kill 
2927165 00:38:40.480 [Pipeline] } 00:38:40.494 [Pipeline] // stage 00:38:40.498 [Pipeline] } 00:38:40.512 [Pipeline] // timeout 00:38:40.516 [Pipeline] } 00:38:40.530 [Pipeline] // catchError 00:38:40.534 [Pipeline] } 00:38:40.547 [Pipeline] // wrap 00:38:40.552 [Pipeline] } 00:38:40.564 [Pipeline] // catchError 00:38:40.572 [Pipeline] stage 00:38:40.574 [Pipeline] { (Epilogue) 00:38:40.586 [Pipeline] catchError 00:38:40.587 [Pipeline] { 00:38:40.599 [Pipeline] echo 00:38:40.600 Cleanup processes 00:38:40.605 [Pipeline] sh 00:38:40.957 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:40.957 3601751 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:41.000 [Pipeline] sh 00:38:41.309 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:41.309 ++ grep -v 'sudo pgrep' 00:38:41.309 ++ awk '{print $1}' 00:38:41.309 + sudo kill -9 00:38:41.309 + true 00:38:41.322 [Pipeline] sh 00:38:41.608 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:53.850 [Pipeline] sh 00:38:54.137 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:54.138 Artifacts sizes are good 00:38:54.151 [Pipeline] archiveArtifacts 00:38:54.158 Archiving artifacts 00:38:54.285 [Pipeline] sh 00:38:54.571 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:54.586 [Pipeline] cleanWs 00:38:54.596 [WS-CLEANUP] Deleting project workspace... 00:38:54.596 [WS-CLEANUP] Deferred wipeout is used... 00:38:54.604 [WS-CLEANUP] done 00:38:54.606 [Pipeline] } 00:38:54.622 [Pipeline] // catchError 00:38:54.634 [Pipeline] sh 00:38:54.921 + logger -p user.info -t JENKINS-CI 00:38:54.931 [Pipeline] } 00:38:54.943 [Pipeline] // stage 00:38:54.949 [Pipeline] } 00:38:54.970 [Pipeline] // node 00:38:54.982 [Pipeline] End of Pipeline 00:38:55.020 Finished: SUCCESS